Reactive pedestrian path following from examples


Ronald A. Metoyer (1), Jessica K. Hodgins (2)

(1) School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97330, USA. metoyer@cs.orst.edu
(2) Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA. jkh@cs.cmu.edu

Published online: 18 November 2004
(c) Springer-Verlag 2004

Architectural and urban planning applications require animations of people to present an accurate and compelling view of a new environment. Ideally, these animations would be easy for a non-programmer to construct, just as buildings and streets can be modeled by an architect or artist using commercial modeling software. In this paper, we explore an approach for generating reactive path following based on the user's examples of the desired behavior. The examples are used to build a model of the desired reactive behavior. The model is combined with reactive control methods to produce natural 2D pedestrian trajectories. The system then automatically generates 3D pedestrian locomotion using a motion-graph approach. We discuss the accuracy of the learned model of pedestrian motion and show that simple direction primitives can be recorded and used to build natural, reactive, path-following behaviors.

Key words: Animation, Pedestrian simulation, Reactive control

1 Introduction

Three-dimensional (3D) models of architectural and urban designs are increasingly used to visualize expensive construction concepts before full production, to plan complex urban areas, and to visualize pedestrian evacuation patterns. Architects and engineers evaluate options during the design process with 2D simulations and 3D visualizations. Land developers use visualizations to produce compelling marketing graphics, and contractors use them to communicate with construction estimators, sub-contractor bidders, and building owners [47]. These visualizations can save time and money for those involved in the construction and urban planning process.
To identify potential problems, the scenes should be produced with accurate structural models as well as realistic human inhabitants. Presently, these scenes often do not include human agents because natural human motion is difficult to create. The animation of high-level behaviors of humans is particularly difficult and time consuming to produce. For example, a scene with human characters in a courtyard would require that the user generate locomotion for each character taking into account collision avoidance (Fig. 1). For a scene of even ten characters, this task becomes difficult. Animators and programmers have developed skills and techniques for generating strikingly realistic human characters. Unfortunately, those who wish to generate animated figures are often not experts in animation or computer programming. We are interested in generating natural reactive path planning by building on user expertise in human navigation strategies. In this paper, we develop a system that allows novice users to control the reactive path planning of human characters. We present an approach that computes most of the character motion automatically while still giving the user control over the resulting animation. While the gross character motion is specified by the user via desired paths, the fine details of the navigation, such as obstacle avoidance and path following, are implemented with automatic reactive navigation techniques. The user can refine the motion by directing the characters with navigation primitives. The system uses this direction, along with other information about the scene, to build a model of desired reactive behavior for use in similar situations. The reactive navigation, user-supplied paths, and learned model are combined to produce 2D time-stamped trajectories. We ultimately need 3D motion that tracks the 2D trajectories. We use a motion-graph algorithm that pieces together

The Visual Computer (2004) 20. Digital Object Identifier (DOI) /s z

sequences of frames, or poses, to track the 2D trajectories in time and space while maintaining natural transitions (Fig. 2). We test the learned model quantitatively by performing cross-validation on a set of user direction examples. We also qualitatively compare the resulting 2D motion to that of models using only reactive control and models using random choice. We assess the quality of our 3D motion generation with trajectory tracking experiments.

Fig. 1. Architectural visualization of a crowd scene in a model of the College of Computing at the Georgia Institute of Technology. Models provided by the Imagine Lab at Georgia Tech.

Fig. 2. In our system, the user's goals are given in the form of desired paths and interactive direction. The interactive direction is used to build a model of desired reactive behavior. This user input is combined with the automatic reactive controller to generate 2D simulation trajectories that are then tracked by 3D characters using a motion-graph approach.

2 Background

Behavior control has been a topic of interest in several fields, including computer graphics, robotics, and urban planning. Early interest in the computer graphics community was sparked by the seminal work of Reynolds, which introduced the Boids model for flocks, schools, and herds [38]. Recently, Massive Software won an Academy Award for the behavioral animation software used to generate battle sequences with thousands of interacting characters [43]. Several others in the graphics area have made progress in creating realistic behaviors for human-like characters, fish, and dinosaurs [14, 36, 46, 48]. Most of these solutions have focused primarily on autonomous behavior. In 1995, Blumberg introduced a model for directing character behavior at multiple levels, giving the user the ability to control the behavior [8].
More recently, he presented a framework for designing a dog character that learns herding behaviors based on clicker training [7]. Our aim is also to add the human into the behavior control and training loop. In the area of intelligent agents, Barnes takes a more direct approach to character training by designing a virtual environment for interactive visual programming of agents. The user specifies preconditions,

postconditions, and the corresponding actions visually within the environment [4]. In other work, we focused on generating natural reactive sports behaviors via demonstration of full trajectories [30]. In this paper, as well as in our previous work, we aim to indirectly collect information about the user's desires in order to build a model of user preferences and produce motions that more closely match these preferences. Several researchers in the graphics area have focused specifically on generating pedestrian behaviors such as navigation planning and natural locomotion [9, 21, 24, 35, 44]. Others have focused on designing simulation environments for generating realistic, directable crowds [12, 32-34, 45]. Ashida and his colleagues analyze actual crowd video to build statistical models of higher-level pedestrian behavior [3]. Goldenstein and his colleagues develop an approach grounded in dynamic systems theory to generate reactive behaviors and crowd behaviors for autonomous agents [16]. Pedestrian behavior modeling is also of interest in urban planning. Urban planning researchers build models of human behavior that are valuable tools for planning and design of urban areas such as shopping malls and city centers [5, 6, 10, 15, 27]. Pedestrian models have been developed at several levels of granularity, ranging from coarse fluid-flow motion models to fine-grain inter-pedestrian interaction models such as that proposed by Helbing and Molnar [17-19]. Helbing's model of social interactions is based on attractive and repulsive potential fields very similar to those used by roboticists for mobile robot control [2, 20, 23, 28]. Using their social forces model, Helbing and Molnar are able to recreate phenomena such as lane formation in halls, queuing, and turn-taking at doorways.
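As a point of reference, the pairwise repulsion in this family of social forces models can be sketched as below. The exponential form is standard for this class of models; the constants A, B, and the summed body radii here are illustrative placeholders rather than calibrated values from the literature.

```python
import numpy as np

def repulsive_force(pos_i, pos_j, A=2.0, B=0.3, radius_sum=0.6):
    """Force pushing pedestrian i away from pedestrian j.

    The magnitude decays exponentially with separation distance, so
    pedestrians that are nearly touching are pushed apart strongly.
    """
    diff = pos_i - pos_j
    dist = np.linalg.norm(diff)
    n_ij = diff / dist  # unit vector pointing from j toward i
    return A * np.exp((radius_sum - dist) / B) * n_ij
```

Summing this term over all nearby pedestrians, plus similar terms for walls and an attractive term for the goal, gives the composite field the model navigates.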
More recently, Quinn and Metoyer implemented a parallel version of the social forces model that can simulate up to 10,000 pedestrians in real time [37]. Others have used cellular automata approaches for modeling pedestrian motion at a similar granularity and have observed similar emergent behaviors [6, 10, 39]. Each of these approaches generates motion that is particularly useful for observing patterns and collecting statistics, but they typically generate robotic-looking motion that is not suitable for realistic visualization. Boston Dynamics' PeopleShop system also provides an interface for novices to design pedestrian content for 3D training and visualization scenes. They allow users to specify paths, sensor regions, and behavior for characters in a terrain, and allow for run-time control of characters using a joystick. Rather than design fully autonomous characters, they implement intelligence amplification to build on the user's intelligence [11]. We are also interested in simplifying the content creation process for novices by providing simple interfaces and characters that improve performance with time.

3 2D intelligence model

To populate a 2D scene with animated pedestrians, the user first describes the motion on a 2D floor plan of the scene to be animated. To relieve the user of some of the tedious details involved with human navigation, we provide a low-level character intelligence model. Character intelligence provides a basic level of behavior for the character via reactive path following in the presence of obstacles, desired paths, and other pedestrians. Reactive control using potential fields is a well-studied area in mobile robotics and in other fields such as pedestrian modeling. We use the social forces model of Helbing and Molnar to model the reactive intelligence of a pedestrian in a 2D representation of an architectural environment [18].
This approach defines obstacles as repulsive potentials, goals as attractive potentials, and combines all potentials to produce a composite potential field (Fig. 3). In 2D space, each pedestrian is modeled with point-mass dynamics. The update equation for a point mass is

x_{t+1} = x_t + ẋ_t dt + (f_x/m) dt²,  (1)

where the force f_x is obtained from the behavior control potential fields, m is the character's mass, and dt is the simulation time step. Similar equations hold for y. The velocity of each point-mass pedestrian is clamped at a limit of 2.0 m/s to allow for fast walking. The social forces model described above provides the user with the ability to specify desired goal locations. In order to allow for more control over the path to the goal location, we provide the user with the ability to supply a natural path. People, as experts in navigating physical environments, can visualize and draw natural paths between two points in a scene

Fig. 3. a and b: Attractive and repulsive fields centered on the circle. c: A repulsive pedestrian field moving in the positive x direction. d: A boundary field caused by the wall.

Fig. 4. Snapshot of the user interface. The dashed lines represent boundaries that the pedestrians are aware of. The squares represent obstacles. The user-supplied natural path is the general path the user wants the pedestrian to take while avoiding all collisions along the way. The path is only shown for the currently selected pedestrian.

in the absence of moving obstacles. The user supplies these paths by drawing directed lines across the floor plan of the environment (Fig. 4). These paths can also be generated using global path-planning algorithms such as visibility roadmaps [25]. The paths provide general guidelines, while the reactive control accounts for potential collisions along these paths. The user-supplied paths are converted into forces:

f_path = k_p [(p/d_max) f(d_max) + (1 − p/d_max) f(s)],  (2)

where f(s) is a unit vector along the path starting at the nearest point on the path, f(d_max) is a unit vector perpendicular to f(s), p is the perpendicular

distance from the nearest point on the path, d_max is the largest allowed perpendicular distance, and k_p is a path gain. The force on the pedestrian is based on the pedestrian's perpendicular distance from the spline that represents the desired natural path it is following (Fig. 5). These reactive models provide a simple system for the user to populate an environment with pedestrians and direct them from one point to another within the environment.

Fig. 5. Force diagram for a character at several points in time along the user-specified natural path. At t_A the character is on the path. At t_B the character has been forced off course and must return to the path. As distance from the path increases (at t_B), the force direction begins to point towards the path. When the distance is small (at t_A), the force direction is aligned with the path.

4 Behavior models from direction primitives

The intelligence model alone should produce correct 2D motion in terms of avoiding the defined obstacles and reaching goals, but it will not necessarily produce natural motion for navigating complex scenes. For example, the characters will typically proceed along a path until a repulsive force causes a sudden direction change, while in real life, path adjustments to avoid collisions are often very subtle (Fig. 6). When the motion resulting from the intelligence model does not meet the user's goals, the user can interactively direct the character with navigation primitives. As a 2D pedestrian simulation progresses in time, potential collisions may arise and the user is visually alerted to the situation. The user can then stop the simulation and provide direction from the following set of navigation primitives: yield, cut-in-front, go-around-right, go-around-left, and no-action.
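The point-mass update (Eq. 1) and path-following force (Eq. 2) described in the previous section can be sketched together as below. The gains, d_max, the mass, and the helper names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def path_force(pos, nearest, tangent, k_p=1.0, d_max=1.5):
    """Path-following force (Eq. 2).

    nearest -- closest point on the natural path to `pos`
    tangent -- unit vector f(s) along the path at `nearest`
    """
    to_path = nearest - pos
    p = np.linalg.norm(to_path)  # perpendicular distance from the path
    f_s = tangent                # along-path component
    f_dmax = to_path / p if p > 1e-9 else np.zeros(2)  # toward the path
    w = min(p / d_max, 1.0)      # far off the path -> pull back; near -> push along
    return k_p * (w * f_dmax + (1.0 - w) * f_s)

def step(pos, vel, force, m=70.0, dt=0.05, v_max=2.0):
    """Point-mass update (Eq. 1), with speed clamped at 2.0 m/s."""
    new_pos = pos + vel * dt + (force / m) * dt**2
    new_vel = vel + (force / m) * dt
    speed = np.linalg.norm(new_vel)
    if speed > v_max:
        new_vel *= v_max / speed
    return new_pos, new_vel
```

In use, the forces from the social forces terms and `path_force` would be summed and fed to `step` once per simulation tick.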
The user observes the motion and either accepts the direction and continues the animation process, or revises the direction. The set of navigation primitives was chosen based on research from the traffic planning field. There are five main tasks a pedestrian undertakes while navigating: monitoring, yielding, checkerboarding, streaming, and avoiding perceptual objects [13]. Checkerboarding occurs in a lane when one person following another does so at a slightly offset position to the left or right in order to avoid stepping on the person in front (and to see around), creating a checkerboard or zipper pattern of pedestrians. Streaming is the act of following directly behind someone in a crowded situation as they create a clear path. Avoiding perceptual objects refers to the avoidance of something that is not truly a physical object that could cause a collision; for example, lines on the pavement are considered perceptual objects. In this paper, we are concerned with two of these tasks, monitoring and yielding, because these are relevant to pedestrian-pedestrian collision avoidance. Monitoring refers to the act of observing pedestrians in the nearby area to determine their navigation intentions. Yielding refers to the act of adjusting velocity (magnitude or direction) in order to avoid a potential collision. The yield primitive is designed to alter the velocity of the pedestrian slightly so that it allows another pedestrian to pass safely in front. The system chooses a target point along the pedestrian's desired natural path before the predicted collision point. A new desired velocity is computed that will put the pedestrian at this point at the predicted time of impact (Fig. 7). Once the collision danger has passed, the pedestrian resumes his original desired velocity. The cut-in-front primitive is implemented in a similar manner, choosing a target point ahead of the predicted point of impact.
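The yield and cut-in-front constructions above reduce to the same computation with opposite-signed cushions. A minimal sketch, in which the cushion distances follow Fig. 7 but the function name and path representation are illustrative assumptions:

```python
import numpy as np

def primitive_velocity(pos, collision_point, path_dir, t_impact, cushion):
    """Velocity placing the pedestrian `cushion` metres along `path_dir`
    from the predicted collision point at the predicted time of impact.

    cushion < 0 stops short of the collision point (yield);
    cushion > 0 passes beyond it (cut-in-front).
    """
    target = collision_point + cushion * path_dir
    return (target - pos) / t_impact

pos = np.array([0.0, 0.0])
collision = np.array([4.0, 0.0])
path_dir = np.array([1.0, 0.0])  # unit tangent of the natural path

v_yield = primitive_velocity(pos, collision, path_dir, t_impact=3.0, cushion=-0.75)
v_cut = primitive_velocity(pos, collision, path_dir, t_impact=3.0, cushion=2.25)
```

Once the collision window has passed, the pedestrian's original desired velocity would be restored.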
Fig. 6. The social forces model alone produces interactions that are often robotic and unnatural. In this example, the characters get very close before taking action to avoid the collision. Although this type of near miss does happen in real life, it is much less common than what we see with the social forces model, especially in uncrowded situations.

Fig. 7. The circles with arrows represent pedestrians and their velocities. The yield navigation primitive chooses a velocity that will allow the pedestrian to have a 0.75 m cushion before the potential collision. The cut-in-front primitive chooses a velocity that will give the pedestrian a 2.25 m cushion beyond the potential collision spot.

Fig. 8. The circles with arrows represent pedestrians and their velocities. The colliding pedestrian is shown approaching from both the left side and the right side (A and B) to demonstrate the two computed avoidance targets for the around-right primitive. The around-right navigation primitive chooses a desired path offset based on the approach angle of the potentially colliding pedestrian. If the pedestrian is approaching from the left, as in A, the offset is chosen with respect to the collision point on the desired path. If the pedestrian is approaching from the right, as in B, the offset is chosen with respect to that pedestrian himself.

The go-around primitive is designed to generate an alteration to the desired path that takes the pedestrian around the potential collision spot and back to the natural path. The path around the collision depends on the relative travel direction of the other pedestrian. An offset position is chosen based on the other pedestrian's approach angle, and a linear path through this position and back to the natural path makes up the path adjustment (Fig. 8). A similar procedure produces a path for going around to the left.

4.1 Generalizing direction examples

One of our goals in this work is to ease the burden on a novice animator who is designing a scene.
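The go-around-right path adjustment described above can be sketched as follows. The offset magnitude, the rejoin distance, and the exact reference-point rule are illustrative assumptions; only the left/right case split follows Fig. 8.

```python
import numpy as np

def go_around_right(collision_point, path_dir, other_pos, other_from_left,
                    offset=0.8):
    """Return detour waypoints taking the pedestrian around the collision.

    other_from_left -- True if the other pedestrian approaches from the
    left; the offset is then taken from the collision point on the path
    (case A, Fig. 8), otherwise from the other pedestrian's own position
    (case B, Fig. 8).
    """
    # Unit vector pointing to the right of the travel direction.
    right = np.array([path_dir[1], -path_dir[0]])
    ref = collision_point if other_from_left else other_pos
    around = ref + offset * right                        # detour point
    rejoin = collision_point + 1.5 * offset * path_dir   # back on the path
    return [around, rejoin]
```

Linear segments through these waypoints replace the natural path until the pedestrian rejoins it; mirroring `right` gives the go-around-left case.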
Rather than discard the novice user's direction examples, we use them to guide the character in future situations. Generalization of direction primitives requires that the system record not only the direction primitives themselves, but also the features of the scene. The

feature vector describes the aspects of the scene that affect the path planning of the character, and computation of these features represents a pedestrian's monitoring behavior. Ideally, features are fully descriptive of the situation while easy and fast to compute. A pedestrian's motion at any point in time may depend on several obstacles or several other pedestrians, as well as the architectural situation (wide or narrow hall, etc.). We have chosen a set of seven discrete features to describe the situation of a pedestrian in an urban scene. The seven features are:

- Is the path around left blocked by other pedestrians or obstacles (Y or N)?
- Is the path around right blocked by other pedestrians or obstacles (Y or N)?
- Relative speed of the colliding pedestrian (5).
- Approach direction of the colliding pedestrian (8).
- Colliding pedestrian's distance to collision (5).
- Pedestrian's distance to collision (5).
- Desired travel direction (3).

The numbers in parentheses represent the number of discretized values for each feature. Speed and direction are clearly important when trying to negotiate a collision. The blocked-right and blocked-left features allow us to account for potential collisions aside from the one under consideration. A single value for each of these variables makes up a feature vector. We experimented with both a naive Bayes approach and a decision tree approach for learning the behavior model from user examples. The four primitives described above (go-around-left, go-around-right, yield, cut-in-front) and a fifth, no-action primitive, are used as the choices, or hypotheses, for the naive Bayes classifier, while the seven variables above make up the features used as its input. We treat each potential collision situation as a choice: the system must classify the potential collision as one of five possible alternatives represented by the direction primitives.
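The feature computation above can be sketched as below. The bin counts follow the list; the bin boundaries themselves, and the function names, are illustrative assumptions.

```python
import numpy as np

def discretize(value, edges):
    """Map a continuous value to a bin index given ascending bin edges."""
    return int(np.searchsorted(edges, value))

def feature_vector(left_blocked, right_blocked, rel_speed, approach_angle,
                   other_dist, own_dist, travel_dir):
    """Seven-feature situation vector for one potential collision."""
    return (
        int(left_blocked),                            # Y/N
        int(right_blocked),                           # Y/N
        discretize(rel_speed, [0.5, 1.0, 1.5, 2.0]),  # 5 speed bins
        int(approach_angle // (2 * np.pi / 8)) % 8,   # 8 approach directions
        discretize(other_dist, [1, 2, 4, 8]),         # 5 distance bins
        discretize(own_dist, [1, 2, 4, 8]),           # 5 distance bins
        travel_dir,                                   # 3 coarse headings
    )
```

Each recorded direction example then pairs one such tuple with the primitive the user chose.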
The naive Bayes classifier is defined as

h_max = argmax_{h_i ∈ H} P(h_i) ∏_j P(a_j | h_i),  (3)

where the h_i are members of H, the set of five possible actions, and the a_j are the seven attributes, or features, described above. The probability of a particular hypothesis h_i is

P(h_i) = n_{h_i} / N,  (4)

where N represents the number of examples seen thus far and n_{h_i} represents the number of examples where the ith primitive was chosen. The conditional probabilities for the attributes can be computed by counting the occurrences of each attribute given each particular hypothesis. To avoid a biased underestimate, we compute the m-estimate for estimating the conditional probabilities [31]. We have also experimented with the C4.5 decision tree algorithm for modeling the decision-making process of the pedestrian characters [31]. Decision trees are particularly appealing because we have discrete-valued primitive choices. Decision trees are also known to be robust to errors in the training data. This property is desirable because the user may give a direction example that contradicts a previous direction example. Decision trees are also interesting because they can be represented as sets of if-then rules, resulting in human-readable rules for pedestrian reactive behavior. In the decision tree, the nodes represent tests of one of the seven attributes or features described above. A node has a child for each possible value of the feature. The leaf nodes represent decisions. A decision tree classifies an instance by sorting the instance down the tree to a leaf node. Each path to a leaf represents a conjunction of feature tests, and the entire tree represents a disjunction of these conjunctions. The tree is built by a top-down greedy search that chooses the best feature to test at the root and each subsequent level. Best is defined as that which results in the most information gain [31].
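The naive Bayes side of this (Eqs. 3 and 4, with m-estimates on the conditionals) can be sketched as below. The value of m and the uniform prior over attribute values are the usual m-estimate ingredients, here assumed rather than taken from the paper.

```python
from collections import Counter, defaultdict

class NaiveBayesDirector:
    def __init__(self, actions, n_values, m=2.0):
        self.actions = actions            # the five primitives
        self.n_values = n_values          # values per feature, e.g. [2, 2, 5, 8, 5, 5, 3]
        self.m = m
        self.n_h = Counter()              # examples per action
        self.n_ah = defaultdict(Counter)  # (action, slot) -> value counts
        self.N = 0

    def train(self, features, action):
        self.N += 1
        self.n_h[action] += 1
        for j, a_j in enumerate(features):
            self.n_ah[(action, j)][a_j] += 1

    def classify(self, features):
        def score(h):
            s = self.n_h[h] / self.N  # P(h_i), Eq. 4
            for j, a_j in enumerate(features):
                # m-estimate: (n_c + m*p) / (n + m), p uniform over values
                p = 1.0 / self.n_values[j]
                s *= (self.n_ah[(h, j)][a_j] + self.m * p) / (self.n_h[h] + self.m)
            return s
        return max(self.actions, key=score)  # Eq. 3
```

Each accepted user direction calls `train`; at run time, `classify` picks the primitive for a new potential collision.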
We will discuss the results of both the naive Bayes and the decision tree model in the results section.

4.2 Run-time behavior

Using the naive Bayes probabilistic model, we can now compute the probabilities of the five possible decisions, or hypotheses: go-around-left, go-around-right, yield, cut-in-front, and no-action. The hypothesis with the highest probability is chosen. When using the decision tree approach, we can simply provide the features to the algorithm, which then produces a single output representing one of the five

decision choices. The model can be used in later situations to reduce the amount of direction necessary from the user. At run time, the pedestrians compute potential collisions by observing the velocities of nearby pedestrians to determine future locations. When a potential collision arises, the pedestrian consults the behavior model to determine the proper navigation action to take. The chosen action runs to completion or until another possible collision occurs. The reactive intelligence model is still used to account for any collisions not avoided by the learned model and to track the desired path. Both the learned model and the reactive model produce forces that are summed to generate a final force for the pedestrian.

5 3D motion generation

For visual display, we need 3D motion that tracks the trajectories from the 2D simulation. We first capture multiple locomotion sequences, such as walking straight at comfortable and slow speeds, turning by various amounts, starting, and coming to a complete stop. Due to limitations of our motion capture system, we were able to capture only short segments of motion in a 9 × 9 foot capture region. We use a motion-graph-based approach to order the motion capture sequences [1, 22, 26, 29, 40-42]. First, we build a transition matrix that defines a distance, in pose space, between any two poses in our set of motion sequences. We then use a beam search combined with a cost function to determine the sequence of poses that tracks the trajectory while maintaining natural transitions.

5.1 Motion capture data

We use approximately 22 motion capture sequences to generate the 3D trajectory tracking motion for each pedestrian in the urban scenes. The sequences are sampled at 30 Hz and stored as individual frames of motion, or poses. A pose is the set of 14 joint angles and the global position and orientation of the root node, the pelvis (Fig. 9).
The yaw information of the root node is extracted so that global orientation is defined without respect to facing direction on the ground plane. The poses are stored along with relative position and orientation offsets of the root body between that pose and the previous pose. The offset information allows us to begin any sequence at an arbitrary location and orientation on the ground plane. The offset positions are computed as Δp = p_i − p_{i−1}, where p_i is the root body's world translation for pose i. The offset orientation is computed as the included angle between two successive global orientations of the root body, ΔM = M_{i−1}^T M_i. Both the offset angle of the root body and the joint angles can be represented as a vector whose direction represents the axis of rotation and whose magnitude represents the amount of rotation. A single pose is described by 16 vectors: 14 joint angles, the root body orientation offset, and the root body translation offset.

Fig. 9. A single frame of motion capture data is stored as a pose. Our pose consists of 14 joint angles as well as the position and orientation of the root body, the pelvis.

A single motion capture sequence is replayed by placing the character at a position and yaw angle on the ground plane and subsequently placing consecutive poses of the sequence. The root position and orientation are updated with the offsets. The joint angles are used directly for each pose. We can now reorder the poses to produce new sequences of walking motion. The goal is to produce a sequence of poses that takes the pedestrian along a predefined 2D trajectory while maintaining natural transitions from pose to pose.

5.2 Pose transition matrix

To determine a resequencing of the motion capture pose data that will track the 2D trajectory while making smooth transitions, we need a measure of the quality of a transition from one pose to another. We first compute the distance between poses i and j as

D_ij = Σ_{k=0}^{N} ||p_ik − p_jk||,  (5)

where N is the number of joints in the pose and p_ik is a vector representing the kth joint angle of pose i. Next, we compute a transition matrix that holds the cost of transitioning from any pose to any other pose of the motion capture data:

T_ij = 0.5 D_{i,j} + 0.5 D_{i+1,j}.  (6)

This transition matrix holds the transition costs from every pose of motion capture data to every other pose, regardless of which sequence (straight, slow, curved, etc.) the pose came from. Transitions with values above a threshold are pruned. After pruning, the matrix may contain dead ends, or poses from which there are no reasonable transitions. Transitions to dead ends are given high values. Figure 10 shows the transition from a straight walking sequence to a turning sequence.

5.3 Transition searching

Once the transition matrix has been computed, generating the trajectory tracking motion becomes a search problem on the graph represented by the transition matrix. The problem is to find the path through this graph that produces natural transitions (small transition costs) while also producing a trajectory for the root body that minimizes distance from the desired 2D trajectory in space and time. To include this 2D trajectory tracking error in the transition computation, we define a cost function

Cost_ij = w_t T_ij + w_p Error_traj,  (7)

where w_t and w_p represent weights for the transition and trajectory error components, and Error_traj represents the trajectory tracking error of the root body (pelvis) when taking the transition from i to j. We use a beam search, which is similar to a breadth-first search except that at each step it keeps only the m best paths through the graph to this point and discards the others. This algorithm will not produce optimal paths, but it is a tractable approach that produces reasonable paths.
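The pose distance (Eq. 5), transition cost (Eq. 6), and cost-weighted beam search (Eq. 7) can be sketched as below. The pruning threshold, the weights, and the `traj_error` callback are placeholders; the 2D simulation would supply the real tracking error.

```python
import numpy as np

def pose_distance(poses, i, j):
    """Eq. 5: sum of per-joint rotation-vector distances between poses."""
    return sum(np.linalg.norm(poses[i][k] - poses[j][k])
               for k in range(len(poses[i])))

def transition_matrix(poses, threshold=np.inf):
    """Eq. 6: cost of transitioning from pose i to pose j."""
    n = len(poses)
    T = np.full((n, n), np.inf)
    for i in range(n - 1):
        for j in range(n):
            T[i, j] = 0.5 * pose_distance(poses, i, j) \
                    + 0.5 * pose_distance(poses, i + 1, j)
    T[T > threshold] = np.inf  # prune unnatural transitions
    return T

def beam_search(T, traj_error, start, steps, beam_width=10, w_t=1.0, w_p=1.0):
    """Keep the beam_width best pose sequences, scored with Eq. 7.

    traj_error(i, j, t) -- root-body tracking error if pose j follows
    pose i at step t (assumed supplied by the 2D simulation).
    """
    beams = [([start], 0.0)]
    for t in range(steps):
        expanded = [(path + [j],
                     cost + w_t * T[path[-1], j] + w_p * traj_error(path[-1], j, t))
                    for path, cost in beams
                    for j in range(T.shape[1]) if np.isfinite(T[path[-1], j])]
        beams = sorted(expanded, key=lambda b: b[1])[:beam_width]
    return beams[0][0]  # best pose sequence found
```

As in the text, this search is greedy per step and not optimal, but it keeps the cost of exploring the transition graph tractable.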
Figure 11 shows the result of a scene in which one character chooses to avoid the collision by going around the other pedestrian to the right.

Fig. 10. The top character is walking along a straight path. The bottom character is walking a curved path. Imagine a trajectory in which the character must walk straight for 10 m, turn right, and walk straight again. The character must choose a point at which to transition from the straight sequence to the turning sequence in order to minimize tracking error as well as maintain smooth motion transitions.

Fig. 11. This scene shows two characters initially on a collision course. In the 2D simulation, one pedestrian chose to go-around-right in order to avoid the collision. This decision resulted in a 2D trajectory different from its desired path, shown with the arrow. The path adjustment is subtle but effective in avoiding the collision. The resulting 3D motion consists of poses from several example navigation sequences, including walking straight, turning right, and turning left in order to get back to the original desired path. (A movie of this scene accompanies the paper.)

6 Results

We measure performance of the system both quantitatively and qualitatively. A user provided a total of 146 direction examples over several scenes. Each example consists of the feature vector for that particular example and the user's specified action. The

learned model from each scene was carried over to the next scene. To evaluate the generalization of the learned behavior model, we compute a 10-fold cross-validation over the entire data set of feature-action pairs. A correct classification is one that produces an action (go-around-left, yield, etc.) that agrees with the user's desired input for the particular example feature vector being classified. We perform this test for two learning algorithms: a naive Bayes classifier and the C4.5 decision tree algorithm. A random guess of the correct navigation primitive would result in 20% accuracy. In our experiments, the naive Bayes classifier produced a 76% accuracy rate, while the C4.5 decision tree algorithm outperformed naive Bayes with a 92% accuracy rate. We believe that the success of the decision tree is due to its robustness to contradictory input from the user. It is clear from observation of our data that users do provide conflicting examples, especially in navigation situations where two possible primitives will result in a reasonable reaction. For example, in some situations it may be perfectly natural to either yield or go-around-left, while in other situations one primitive is clearly a better choice than the other.

Fig. 12. Filmstrip of the tracking performance of four 3D pedestrian characters. Images are viewed from left to right and top to bottom. The spheres represent the character's desired location every 10 frames. The motion of each character spans approximately 10 meters. The resulting motion is a sequence of poses from several motion capture sequences, where each sequence spanned no more than approximately 4 m. Transitions between poses are determined automatically by the search over the transition graph.

We ran several tests to determine the accuracy of the 3D tracking and the transitions (Fig. 12).
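The 10-fold cross-validation described above can be sketched as below. The shuffling seed and the `make_classifier` callback, which stands in for training either the naive Bayes or C4.5 learner on a fold's training split, are illustrative assumptions.

```python
import random

def cross_validate(examples, make_classifier, k=10, seed=0):
    """Average accuracy over k folds of (features, action) examples."""
    data = examples[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        clf = make_classifier(train)  # returns a features -> action function
        correct = sum(clf(x) == y for x, y in test)
        scores.append(correct / len(test))
    return sum(scores) / k
```

Running this over the 146 recorded feature-action pairs with each learner yields the per-model accuracies reported above.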
As curvature increases, tracking accuracy decreases because we have fewer good examples of curved walking than of straight walking. Table 1 shows the average pose error, path error, and combined error for trajectories with curvatures ranging from straight to a corner turn (turn3). Our tests were run with approximately 3500 frames, or two minutes of motion data.

Table 1. Errors for the 3D trajectory tracking pedestrian. The path and pose errors are computed for trajectory curvature values ranging from straight to a corner turn (turn3). Pose error represents the average transition cost (Eq. 6) for the duration of the motion. Path error represents the average distance, in meters, of the character's root node (pelvis) from the time-stamped samples of the path for the duration of the motion. In general, as curvature increases, both pose and path errors increase.

Trajectory tracking errors
Curvature    straight    turn1    turn2    turn3
Pose
Path
Total
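The path error defined in the caption, the average distance of the root from the time-stamped path samples, can be computed as follows. This is a sketch; the pose error (the transition cost of Eq. 6) is not reproduced here:

```python
import numpy as np

def path_error(root_positions, path_samples):
    """Average distance (m) between the character's root (pelvis) position
    and the time-stamped path sample at each frame, as in Table 1."""
    root = np.asarray(root_positions, dtype=float)
    path = np.asarray(path_samples, dtype=float)
    return float(np.mean(np.linalg.norm(root - path, axis=1)))

# Toy check: a root that tracks a straight 2D path with a constant
# 0.1 m lateral offset should report a 0.1 m average path error.
t = np.linspace(0.0, 10.0, 50)
path = np.stack([t, np.zeros_like(t)], axis=1)  # straight path samples
root = path + np.array([0.0, 0.1])              # constant lateral offset
print(path_error(root, path))                    # ~ 0.1
```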

The lack of data results in errors in tracking as well as poor transitions in some cases. We believe that with more curved walking sequences, the algorithm would produce better tracking and transitions for curved trajectories.

We also qualitatively compared the motion of a crowd using the learned model to a crowd using a random-choice model and to a crowd using only a reactive model. In the learned-model scene, all pedestrians used the probabilistic naive Bayes model combined with the reactive model to navigate the environment. In the random scene, all pedestrians made random primitive choices. In the reactive-control scene, all pedestrians used only the social forces model. The pedestrians using the learned model produced much more natural motion than the random-choice pedestrians, and more natural motion than pedestrians using only the social forces navigation model (Fig. 13 and Fig. 14).

Fig. 13. In these images, the pedestrian's velocity vector has been extended for clarity. A pair of white vectors indicates that two pedestrians are in danger of colliding, while a pair of gray vectors indicates that two pedestrians are on a less-urgent collision course. Black vectors indicate no apparent danger of a collision. The top image shows a snapshot of a simulation using only the social forces model to guide the motion of the pedestrians. The middle image shows the same point in time for a system using a random choice of navigation primitives for potential collisions. The bottom image represents a simulation using the social forces model combined with the learned naive Bayes probabilistic model. (To view the movie for this scene, see

7 Discussion

We have used our approach to animate pedestrians in a 3D visualization of an architectural scene with simple user input while maintaining user control (Fig. 1).
In 2D, we allow the user to populate and direct a simple representation of pedestrians in an architectural scene, and we further utilize the user's direction to build a model of character behavior for future similar situations. In 3D, we have generated automatic trajectory tracking for articulated 3D pedestrian characters. Although we have chosen a small set of direction primitives, we are able to demonstrate the utility of retaining this direction rather than discarding it after the animation is produced. When the user must animate a new scene with similar conditions, the model will, in many cases, produce the correct behavior, reducing the number of times the user must provide direction. We have demonstrated the utility of learning the desired behaviors using both the naive Bayes and C4.5 decision tree learning algorithms. Our examples have typically shown pedestrian characters in a scene, allowing us to show a reasonably sized crowd that is not too congested. Without congestion, the pedestrians enter situations where they must make strategic decisions about navigating. As scenes become more congested, fewer options exist and navigation strategy becomes more reactive. In our experience, the social forces model alone produces fairly natural results for large, more congested scenes because most motion is reactive within a very small planning area.
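A minimal sketch of the social forces update discussed above, after Helbing and Molnar: each pedestrian feels a driving force toward its desired velocity plus exponential repulsion from other pedestrians. The constants here are illustrative, not the calibrated values used in the paper:

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.1, desired_speed=1.3,
                      tau=0.5, A=2.0, B=0.3):
    """One explicit-Euler step of a simplified social forces model.
    A (repulsion strength) and B (repulsion range, m) are assumed values."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):
        # Driving force: relax toward the desired velocity within time tau.
        to_goal = goals[i] - pos[i]
        desired_vel = desired_speed * to_goal / (np.linalg.norm(to_goal) + 1e-9)
        force[i] = (desired_vel - vel[i]) / tau
        # Repulsive force from every other pedestrian.
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            force[i] += A * np.exp(-dist / B) * d / dist
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel

# Two pedestrians walk toward each other; a small lateral offset lets the
# repulsion deflect them around one another rather than stalling head-on.
pos = np.array([[0.0, 0.0], [5.0, 0.05]])
vel = np.zeros((2, 2))
goals = np.array([[5.0, 0.0], [0.0, 0.0]])
for _ in range(30):
    pos, vel = social_force_step(pos, vel, goals)
```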

Fig. 13. A filmstrip of three simulations at the same points in time. In these images, the pedestrian's velocity vector has been extended for clarity. A pair of white vectors indicates that two pedestrians are in danger of colliding, while a pair of gray vectors indicates that two pedestrians are on a less-urgent collision course. Black vectors indicate no apparent danger of a collision. The images on the left (from top to bottom) are snapshots of a simulation as it runs using only the social forces model for control. The middle images are snapshots of the simulation when making random choices for the navigation primitives. On the right are snapshots of the simulation using the learned behavior model. (To view the movie for this scene, see

A useful result of our 3D motion approach is the automatic choice of proper sequences for velocity changes as well as path curvature changes. For example, if the 2D simulated character comes to a stop in order to avoid a collision and then continues along its path, the 3D algorithm produces a sequence of motion capture poses that brings the 3D character to a complete stop and eventually back to a forward walking sequence. The motion data and search structure enforce natural transitions because the actor who was recorded performed only natural motions, and transitions are made only between similar poses.

Our 3D motion could be further improved by collecting more data. In the 3D crowd scene, for example, the animator created a situation where two characters met in the middle and appeared to be conversing. Unfortunately, we had no motion capture data of a character standing still; the closest motion was a sequence of a person turning in place. As a result, the characters appear to turn their backs on one another. This problem could be solved by adding relevant sequences to the database of motion. More data would also allow us to vary the motion between pedestrians; currently, all pedestrians draw from the same motion library.

We are currently investigating several related research projects. First, we have implemented a parallel version of Helbing's social forces model that can simulate up to 10,000 pedestrians in real time on a multicomputer. We are also building a tabletop sketch-based interface for interacting with this simulation in real time as it executes. We hope to use this table-based interface to provide a tool for domain experts, such as a security specialist, to explore what-if scenarios for important situations such as evacuation during an emergency.
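The automatic transition choice described above can be sketched as a greedy cost minimization over the poses the motion graph allows next: a smoothness term (pose difference) plus a tracking term (distance to the time-stamped path target). Poses are reduced here to plain joint-angle vectors, and the paper's actual transition cost (Eq. 6) is not reproduced:

```python
import numpy as np

def pick_transition(current_pose, candidates, target_pos,
                    w_pose=1.0, w_path=1.0):
    """Greedy sketch: among candidate (pose, root_position) pairs reachable
    in the transition graph, minimize a weighted sum of pose difference
    (smoothness) and root distance to the path target (tracking)."""
    best, best_cost = None, float("inf")
    for pose, root in candidates:
        pose_cost = np.linalg.norm(pose - current_pose)   # smoothness term
        path_cost = np.linalg.norm(root - target_pos)     # tracking term
        cost = w_pose * pose_cost + w_path * path_cost
        if cost < best_cost:
            best, best_cost = (pose, root), cost
    return best, best_cost

# A character coming to a stop: the near-identical standing pose wins over
# a mid-stride pose when both roots sit on the path target.
standing = (np.zeros(4), np.array([2.0, 0.0]))
mid_stride = (np.full(4, 0.8), np.array([2.0, 0.0]))
(best_pose, _), cost = pick_transition(np.zeros(4), [standing, mid_stride],
                                       np.array([2.0, 0.0]))
```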
Our goal is to develop tools and techniques that allow novice users to produce compelling scenes of pedestrians in 3D environments for use in interactive training, visualization, and entertainment.

Acknowledgements. The authors would like to thank Dr. Peter Molnar and the Georgia Tech College of Computing for the valuable input and support during the course of this work.

References

1. Arikan O, Forsyth DA (2002) Interactive motion generation from examples. ACM Trans Graph 21
2. Arkin R (1987) Motor schema based navigation for a mobile robot. In: Proceedings of the 1987 IEEE Conference on Robotics and Automation
3. Ashida K, Lee S, Allbeck J, Sun H, Badler N, Metaxas D (2001) Pedestrians: creating agent behaviors through statistical analysis of observation data. In: Proceedings of Computer Animation
4. Barnes C (2000) Visual programming for virtual environments. In: Proceedings from 2000 AAAI Spring Symposium on Smart Graphics
5. Batty M, Jiang B, Thurstain-Goodwin M (1998) Local movement: agent-based models of pedestrian flow. Center for Advanced Spatial Analysis Working Paper Series (4)
6. Blue VJ, Adler JL (2000) Cellular automata model of emergent collective bi-directional pedestrian dynamics. In: Artificial Life VII: Proceedings of the Seventh International Conference on Artificial Life
7. Blumberg BM, Downie M, Ivanov Y, Berlin M, Johnson MP, Tomlinson B (2002) Integrated learning for interactive synthetic characters. ACM Trans Graph 21(3)
8. Blumberg BM, Galyean TA (1995) Multi-level direction of autonomous creatures for real-time virtual environments. In: Proceedings of SIGGRAPH 95, Annual Conference Series. Addison-Wesley, Boston
9. Choi MG, Lee J, Shin SY (2003) Planning biped locomotion using motion capture data and probabilistic roadmaps. ACM Trans Graph 22(2)
10. Dijkstra J, Timmermans H (2000) Towards a multi-agent system for visualizing simulated behavior within the built environment. In: Proceedings of Design and Decision Support Systems in Architecture and Urban Planning Conference: DDSS 2000
11. Boston Dynamics (2000) Peopleshop
12. Farenc N, Musse SR, Schweiss E, Kallmann M, Aune O, Boulic R, Thalmann D (2000) A paradigm for controlling virtual humans in urban environment simulations. Appl Artif Intell 14(1)
13. Feurtey F (2000) Simulation of collision avoidance behavior for pedestrians. Dissertation, The University of Tokyo, School of Engineering
14. Funge J, Tu X, Terzopoulos D (1999) Cognitive modeling: knowledge, reasoning and planning for intelligent characters. In: Proceedings of SIGGRAPH 99, Annual Conference Series. Addison-Wesley, Boston
15. Gipps GP, Marksjo B (1985) A micro-simulation model for pedestrian flows. Math Comput Simul 27
16. Goldenstein S, Large E, Metaxas D (1999) Non-linear dynamical system approach to behavior modeling. Vis Comput 15(7/8)
17. Helbing D (1992) A fluid dynamic model for the movement of pedestrians. Complex Syst 6
18. Helbing D, Molnar P (1995) Social force model for pedestrian dynamics. Phys Rev E 51(5)
19. Henderson LF (1974) On the fluid mechanics of human crowd motion. Transp Res 8
20. Khatib O (1986) Real-time obstacle avoidance for manipulators and mobile robots. Int J Robot Res 5(1)
21. Ko H, Cremer J (1996) VRLOCO: real-time human locomotion from positional input streams. Presence 5(4)
22. Kovar L, Gleicher M, Pighin F (2002) Motion graphs. ACM Trans Graph 21
23. Krogh B, Thorpe C (1986) Integrated path planning and dynamic steering control for autonomous vehicles. In: Proceedings of the IEEE Conference on Robotics and Automation
24. Kuffner JJ (1998) Goal-directed navigation for animated characters using real-time path planning and control. In: CAPTECH 98: Workshop on Modelling and Motion Capture Techniques for Virtual Environments. Springer, Berlin Heidelberg New York
25. Latombe J (1991) Robot motion planning. Kluwer, Norwell, MA
26. Lee J, Chai J, Reitsma PSA, Hodgins JK, Pollard NS (2002) Interactive control of avatars animated with human motion data. ACM Trans Graph 21
27. Lovas GG (1993) Modeling and simulation of pedestrian traffic flow. In: Modeling and Simulation: Proceedings of 1993 European Simulation Multiconference
28. Lyons D (1986) Tagged potential fields: an approach to specification of complex manipulator configurations. In: Proceedings of the IEEE Conference on Robotics and Automation
29. Metoyer R (2002) Building behaviors with examples. Dissertation, Georgia Institute of Technology
30. Metoyer R, Hodgins JK (2000) Animating athletic motion planning by example. In: Proceedings of Graphics Interface
31. Mitchell T (1997) Machine learning. McGraw-Hill, New York
32. Musse SR, Garat F, Thalmann D (1999) Guiding and interacting with virtual crowds in real-time. In: Proceedings of Eurographics: Computer Animation and Simulation 99. Springer, Berlin Heidelberg New York
33. Musse SR, Thalmann D (1997) A model of human crowd behavior: group inter-relationship and collision detection analysis. In: Proceedings of Eurographics: CAS 97 Workshop on Computer Animation and Simulation
34. Musse SR, Thalmann D (2001) Hierarchical model for real time simulation of virtual human crowds. IEEE Trans Vis Comput Graph 7(2)
35. Noser H, Renault O, Thalmann D, Magnenat-Thalmann N (1995) Navigation for digital actors based on synthetic vision, memory and learning. Comput Graph 19(1)
36. Perlin K, Goldberg A (1996) Improv: a system for scripting interactive actors in virtual worlds. In: Proceedings of SIGGRAPH 96, Annual Conference Series. Addison-Wesley, Boston
37. Quinn M, Metoyer R (2003) A parallel implementation of the social forces model. In: Proceedings of the Second International Conference in Pedestrian and Evacuation Dynamics
38. Reynolds CW (1987) Flocks, herds, and schools: a distributed behavioral model. In: Proceedings of SIGGRAPH 87, Annual Conference Series. ACM Press, New York
39. Schadschneider A (2002) Cellular automaton approach to pedestrian dynamics. In: Pedestrian and Evacuation Dynamics, Conference Proceedings
40. Schödl A, Essa I (2000) Machine learning for video-based rendering. In: Advances in Neural Information Processing Systems, vol 13. MIT Press, Boston
41. Schödl A, Essa I (2002) Controlled animation of video sprites. In: Proceedings of the First ACM Symposium on Computer Animation
42. Schödl A, Szeliski R, Salesin D, Essa I (2000) Video textures. In: Proceedings of SIGGRAPH 00, Annual Conference Series. Addison-Wesley, Boston
43. Massive Software (2004) Massive
44. Sun H, Metaxas DN (2001) Automating gait generation. In: Proceedings of SIGGRAPH 01, Annual Conference Series. ACM Press, New York
45. Tecchia F, Loscos C, Conroy R, Chrysanthou Y (2001) Agent behavior simulator (ABS): a platform for urban behaviour development. In: Proceedings of Games Technology Conference
46. Tu X, Terzopoulos D (1994) Artificial fishes: physics, locomotion, perception, behavior. In: Proceedings of SIGGRAPH 94, Annual Conference Series. ACM Press, New York
47. VISARC (1999) Professional visualization services for the building industry
48. Webber B, Badler N (1995) Animation through reactions, transition nets and plans. In: Proceedings of the International Workshop on Human Interface Technology

Photographs of the authors and their biographies are given on the next page.

RONALD METOYER is an assistant professor in the School of Electrical Engineering and Computer Science at Oregon State University. In 2001 he received an NSF CAREER Award for research in computer animation. He currently leads the Interactive Graphics and Vision Lab (IGVL) along with his colleague, Dr. Eric Mortensen. His research focuses on creating interactive spaces for training and education. In particular, he investigates techniques for creating believable character motion and for making animated characters accessible to the novice user. Current projects include motion-capture-based locomotion, content-creation interfaces for novice users, and behavioral control.

JESSICA HODGINS is an associate professor of Computer Science and Robotics at Carnegie Mellon University. She was previously on the faculty of the College of Computing and the Graphics, Visualization and Usability Center at Georgia Tech. She received an NSF Young Investigator Award, a Packard Fellowship, and a Sloan Foundation Fellowship. She was editor-in-chief of ACM Transactions on Graphics and papers chair of SIGGRAPH. Her research focuses on the coordination and control of dynamic physical systems, both natural and human-made, and explores techniques that allow robots and simulated humans to control their actions in complex and unpredictable environments. Ongoing projects include data-driven animation, simulation of human motion, animation interfaces for naive users, and measurements of human perception of animated motion.


More information

Computer Graphics. Si Lu. Fall uter_graphics.htm 11/27/2017

Computer Graphics. Si Lu. Fall uter_graphics.htm 11/27/2017 Computer Graphics Si Lu Fall 2017 http://web.cecs.pdx.edu/~lusi/cs447/cs447_547_comp uter_graphics.htm 11/27/2017 Last time o Ray tracing 2 Today o Animation o Final Exam: 14:00-15:30, Novermber 29, 2017

More information

Obstacle Avoidance Project: Final Report

Obstacle Avoidance Project: Final Report ERTS: Embedded & Real Time System Version: 0.0.1 Date: December 19, 2008 Purpose: A report on P545 project: Obstacle Avoidance. This document serves as report for P545 class project on obstacle avoidance

More information

Intelligent Third-Person Control of 3D Avatar Motion

Intelligent Third-Person Control of 3D Avatar Motion Appear in Proceedings of the 7th International Symposium on Smart Graphics, 2007. Intelligent Third-Person Control of 3D Avatar Motion Chun-Chieh Chen 1, Tsai-Yen Li 1 1 Computer Science Department, National

More information

STEERING BEHAVIORS. Markéta Popelová, marketa.popelova [zavináč] matfyz.cz. 2012, Umělé Bytosti, MFF UK

STEERING BEHAVIORS. Markéta Popelová, marketa.popelova [zavináč] matfyz.cz. 2012, Umělé Bytosti, MFF UK STEERING BEHAVIORS Markéta Popelová, marketa.popelova [zavináč] matfyz.cz 2012, Umělé Bytosti, MFF UK MOTIVATION MOTIVATION REQUIREMENTS FOR MOTION CONTROL Responding to dynamic environment Avoiding obstacles

More information

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation

Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation Chris J. Needham and Roger D. Boyle School of Computing, The University of Leeds, Leeds, LS2 9JT, UK {chrisn,roger}@comp.leeds.ac.uk

More information

Real-time Crowd Movement On Large Scale Terrains

Real-time Crowd Movement On Large Scale Terrains Real-time Crowd Movement On Large Scale Terrains Wen Tang, Tao Ruan Wan* and Sanket Patel School of Computing and Mathematics, University of Teesside, Middlesbrough, United Kingdom E-mail: w.tang@tees.ac.uk

More information

Instant Prediction for Reactive Motions with Planning

Instant Prediction for Reactive Motions with Planning The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 2009 St. Louis, USA Instant Prediction for Reactive Motions with Planning Hisashi Sugiura, Herbert Janßen, and

More information

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007

Robotics Project. Final Report. Computer Science University of Minnesota. December 17, 2007 Robotics Project Final Report Computer Science 5551 University of Minnesota December 17, 2007 Peter Bailey, Matt Beckler, Thomas Bishop, and John Saxton Abstract: A solution of the parallel-parking problem

More information

Animating Non-Human Characters using Human Motion Capture Data

Animating Non-Human Characters using Human Motion Capture Data Animating Non-Human Characters using Human Motion Capture Data Laurel Bancroft 1 and Jessica Hodgins 2 1 College of Fine Arts, Carngie Mellon University, lbancrof@andrew.cmu.edu 2 Computer Science, Carnegie

More information

A steering model for on-line locomotion synthesis. Introduction. By Taesoo Kwon and Sung Yong Shin *

A steering model for on-line locomotion synthesis. Introduction. By Taesoo Kwon and Sung Yong Shin * COMPUTER ANIMATION AND VIRTUAL WORLDS Comp. Anim. Virtual Worlds 2007; 18: 463 472 Published online 13 July 2007 in Wiley InterScience (www.interscience.wiley.com).185 A steering model for on-line locomotion

More information

Probabilistic Methods for Kinodynamic Path Planning

Probabilistic Methods for Kinodynamic Path Planning 16.412/6.834J Cognitive Robotics February 7 th, 2005 Probabilistic Methods for Kinodynamic Path Planning Based on Past Student Lectures by: Paul Elliott, Aisha Walcott, Nathan Ickes and Stanislav Funiak

More information

Crowd simulation of pedestrians in a virtual city

Crowd simulation of pedestrians in a virtual city Crowd simulation of pedestrians in a virtual city Submitted in partial fulfilment of the requirements of the degree Bachelor of Science (Honours) of Rhodes University Flora Ponjou Tasse November 3, 2008

More information

Environmental Modeling for Autonomous Virtual Pedestrians

Environmental Modeling for Autonomous Virtual Pedestrians 05DHM-55 Environmental Modeling for Autonomous Virtual Pedestrians Copyright 2005 SAE International Wei Shao and Demetri Terzopoulos Media Research Lab, Courant Institute, New York University ABSTRACT

More information

Automatic High Level Avatar Guidance Based on Affordance of Movement

Automatic High Level Avatar Guidance Based on Affordance of Movement EUROGRAPHICS 2003 / M. Chover, H. Hagen and D. Tost Short Presentations Automatic High Level Avatar Guidance Based on Affordance of Movement Despina Michael and Yiorgos Chrysanthou Department of Computer

More information

Dynamic Adaptive Disaster Simulation: A Predictive Model of Emergency Behavior Using Cell Phone and GIS Data 1

Dynamic Adaptive Disaster Simulation: A Predictive Model of Emergency Behavior Using Cell Phone and GIS Data 1 Dynamic Adaptive Disaster Simulation: A Predictive Model of Emergency Behavior Using Cell Phone and GIS Data 1, Zhi Zhai, Greg Madey Dept. of Computer Science and Engineering University of Notre Dame Notre

More information

THE development of stable, robust and fast methods that

THE development of stable, robust and fast methods that 44 SBC Journal on Interactive Systems, volume 5, number 1, 2014 Fast Simulation of Cloth Tearing Marco Santos Souza, Aldo von Wangenheim, Eros Comunello 4Vision Lab - Univali INCoD - Federal University

More information

Simplified Walking: A New Way to Generate Flexible Biped Patterns

Simplified Walking: A New Way to Generate Flexible Biped Patterns 1 Simplified Walking: A New Way to Generate Flexible Biped Patterns Jinsu Liu 1, Xiaoping Chen 1 and Manuela Veloso 2 1 Computer Science Department, University of Science and Technology of China, Hefei,

More information

STEERING BEHAVIORS MOTIVATION REQUIREMENTS FOR MOTION CONTROL MOTIVATION BOIDS & FLOCKING MODEL STEERING BEHAVIORS - BASICS

STEERING BEHAVIORS MOTIVATION REQUIREMENTS FOR MOTION CONTROL MOTIVATION BOIDS & FLOCKING MODEL STEERING BEHAVIORS - BASICS Přednáška byla podpořena v rámci projektu OPPA CZ.2.17/3.1.00/33274 financovaného Evropským sociálním fondem a rozpočtem hlavního města Prahy. Evropský sociální fond Praha & EU: investujeme do Vaší budoucnosti

More information

Neural Networks for Obstacle Avoidance

Neural Networks for Obstacle Avoidance Neural Networks for Obstacle Avoidance Joseph Djugash Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 josephad@andrew.cmu.edu Bradley Hamner Robotics Institute Carnegie Mellon University

More information

Generating sparse navigation graphs for microscopic pedestrian simulation models

Generating sparse navigation graphs for microscopic pedestrian simulation models Generating sparse navigation graphs for microscopic pedestrian simulation models Angelika Kneidl 1, André Borrmann 1, Dirk Hartmann 2 1 Computational Modeling and Simulation Group, TU München, Germany

More information

Motion Control in Dynamic Multi-Robot Environments

Motion Control in Dynamic Multi-Robot Environments Motion Control in Dynamic Multi-Robot Environments Michael Bowling mhb@cs.cmu.edu Manuela Veloso mmv@cs.cmu.edu Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213-3890 Abstract

More information

Master s Thesis. Animal Stampede Simulation

Master s Thesis. Animal Stampede Simulation Master s Thesis Animal Stampede Simulation Akila Lakshminarayanan Brian Tran MSc Computer Animation and Visual Effects, NCCA 2011-2012 Abstract Large crowd scenes with humans and animals are abundant in

More information

Rapid Simultaneous Learning of Multiple Behaviours with a Mobile Robot

Rapid Simultaneous Learning of Multiple Behaviours with a Mobile Robot Rapid Simultaneous Learning of Multiple Behaviours with a Mobile Robot Koren Ward School of Information Technology and Computer Science University of Wollongong koren@uow.edu.au www.uow.edu.au/~koren Abstract

More information

Fast Local Planner for Autonomous Helicopter

Fast Local Planner for Autonomous Helicopter Fast Local Planner for Autonomous Helicopter Alexander Washburn talexan@seas.upenn.edu Faculty advisor: Maxim Likhachev April 22, 2008 Abstract: One challenge of autonomous flight is creating a system

More information

Crowd simulation influenced by agent s sociopsychological

Crowd simulation influenced by agent s sociopsychological HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/ Crowd simulation influenced by agent s sociopsychological state F. Cherif, and R. Chighoub 48 Abstract The aim our work is to create virtual humans as

More information

The Analysis of Animate Object Motion using Neural Networks and Snakes

The Analysis of Animate Object Motion using Neural Networks and Snakes The Analysis of Animate Object Motion using Neural Networks and Snakes Ken Tabb, Neil Davey, Rod Adams & Stella George e-mail {K.J.Tabb, N.Davey, R.G.Adams, S.J.George}@herts.ac.uk http://www.health.herts.ac.uk/ken/vision/

More information

Chapter 2 Trajectory and Floating-Car Data

Chapter 2 Trajectory and Floating-Car Data Chapter 2 Trajectory and Floating-Car Data Measure what is measurable, and make measurable what is not so. Galileo Galilei Abstract Different aspects of traffic dynamics are captured by different measurement

More information

Toward realistic and efficient virtual crowds. Julien Pettré - June 25, 2015 Habilitation à Diriger des Recherches

Toward realistic and efficient virtual crowds. Julien Pettré - June 25, 2015 Habilitation à Diriger des Recherches Toward realistic and efficient virtual crowds Julien Pettré - June 25, 2015 Habilitation à Diriger des Recherches A short Curriculum 2 2003 PhD degree from the University of Toulouse III Locomotion planning

More information

Linear combinations of simple classifiers for the PASCAL challenge

Linear combinations of simple classifiers for the PASCAL challenge Linear combinations of simple classifiers for the PASCAL challenge Nik A. Melchior and David Lee 16 721 Advanced Perception The Robotics Institute Carnegie Mellon University Email: melchior@cmu.edu, dlee1@andrew.cmu.edu

More information

Cloth Animation with Collision Detection

Cloth Animation with Collision Detection Cloth Animation with Collision Detection Mara Guimarães da Silva Figure 1: Cloth blowing in the wind. Abstract This document reports the techniques and steps used to implemented a physically based animation

More information

Tracking of Human Body using Multiple Predictors

Tracking of Human Body using Multiple Predictors Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,

More information

A Neural Classifier for Anomaly Detection in Magnetic Motion Capture

A Neural Classifier for Anomaly Detection in Magnetic Motion Capture A Neural Classifier for Anomaly Detection in Magnetic Motion Capture Iain Miller 1 and Stephen McGlinchey 2 1 University of Paisley, Paisley. PA1 2BE, UK iain.miller@paisley.ac.uk, 2 stephen.mcglinchey@paisley.ac.uk

More information

An Open Framework for Developing, Evaluating, and Sharing Steering Algorithms

An Open Framework for Developing, Evaluating, and Sharing Steering Algorithms An Open Framework for Developing, Evaluating, and Sharing Steering Algorithms Shawn Singh, Mubbasir Kapadia, Petros Faloutsos, and Glenn Reinman University of California, Los Angeles Abstract. There are

More information

Collaborators. Multiple Agents & Crowd Simulation: training sytems 5/15/2010. Interactive Multi-Robot Planning and Multi-Agent Simulation

Collaborators. Multiple Agents & Crowd Simulation: training sytems 5/15/2010. Interactive Multi-Robot Planning and Multi-Agent Simulation Interactive Multi-Robot Planning and Multi-Agent Simulation Dinesh Manocha UNC Chapel Hill dm@cs.unc.edu http://gamma.cs.unc.edu Collaborators Ming C. Lin Jur van der Berg Sean Curtis Russell Gayle Stephen

More information

CS 4758 Robot Navigation Through Exit Sign Detection

CS 4758 Robot Navigation Through Exit Sign Detection CS 4758 Robot Navigation Through Exit Sign Detection Aaron Sarna Michael Oleske Andrew Hoelscher Abstract We designed a set of algorithms that utilize the existing corridor navigation code initially created

More information

The Analysis of Animate Object Motion using Neural Networks and Snakes

The Analysis of Animate Object Motion using Neural Networks and Snakes The Analysis of Animate Object Motion using Neural Networks and Snakes Ken Tabb, Neil Davey, Rod Adams & Stella George e-mail {K.J.Tabb, N.Davey, R.G.Adams, S.J.George}@herts.ac.uk http://www.health.herts.ac.uk/ken/vision/

More information

Complex behavior emergent from simpler ones

Complex behavior emergent from simpler ones Reactive Paradigm: Basics Based on ethology Vertical decomposition, as opposed to horizontal decomposition of hierarchical model Primitive behaviors at bottom Higher behaviors at top Each layer has independent

More information

Real-time Path Planning and Navigation for Multi-Agent and Heterogeneous Crowd Simulation

Real-time Path Planning and Navigation for Multi-Agent and Heterogeneous Crowd Simulation Real-time Path Planning and Navigation for Multi-Agent and Heterogeneous Crowd Simulation Ming C. Lin Department of Computer Science University of North Carolina at Chapel Hill lin@cs.unc.edu Joint work

More information

Modeling Physically Simulated Characters with Motion Networks

Modeling Physically Simulated Characters with Motion Networks In Proceedings of Motion In Games (MIG), Rennes, France, 2012 Modeling Physically Simulated Characters with Motion Networks Robert Backman and Marcelo Kallmann University of California Merced Abstract.

More information

Image Processing Techniques and Smart Image Manipulation : Texture Synthesis

Image Processing Techniques and Smart Image Manipulation : Texture Synthesis CS294-13: Special Topics Lecture #15 Advanced Computer Graphics University of California, Berkeley Monday, 26 October 2009 Image Processing Techniques and Smart Image Manipulation : Texture Synthesis Lecture

More information

Data-driven Approaches to Simulation (Motion Capture)

Data-driven Approaches to Simulation (Motion Capture) 1 Data-driven Approaches to Simulation (Motion Capture) Ting-Chun Sun tingchun.sun@usc.edu Preface The lecture slides [1] are made by Jessica Hodgins [2], who is a professor in Computer Science Department

More information

Image resizing and image quality

Image resizing and image quality Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 2001 Image resizing and image quality Michael Godlewski Follow this and additional works at: http://scholarworks.rit.edu/theses

More information

COMP 175 COMPUTER GRAPHICS. Lecture 10: Animation. COMP 175: Computer Graphics March 12, Erik Anderson 08 Animation

COMP 175 COMPUTER GRAPHICS. Lecture 10: Animation. COMP 175: Computer Graphics March 12, Erik Anderson 08 Animation Lecture 10: Animation COMP 175: Computer Graphics March 12, 2018 1/37 Recap on Camera and the GL Matrix Stack } Go over the GL Matrix Stack 2/37 Topics in Animation } Physics (dynamics, simulation, mechanics)

More information

Interactive Computer Graphics

Interactive Computer Graphics Interactive Computer Graphics Lecture 18 Kinematics and Animation Interactive Graphics Lecture 18: Slide 1 Animation of 3D models In the early days physical models were altered frame by frame to create

More information

Path Finding and Collision Avoidance in Crowd Simulation

Path Finding and Collision Avoidance in Crowd Simulation Journal of Computing and Information Technology - CIT 17, 2009, 3, 217 228 doi:10.2498/cit.1000873 217 Path Finding and Collision Avoidance in Crowd Simulation Cherif Foudil 1, Djedi Noureddine 1, Cedric

More information

Construction site pedestrian simulation with moving obstacles

Construction site pedestrian simulation with moving obstacles Construction site pedestrian simulation with moving obstacles Giovanni Filomeno 1, Ingrid I. Romero 1, Ricardo L. Vásquez 1, Daniel H. Biedermann 1, Maximilian Bügler 1 1 Lehrstuhl für Computergestützte

More information

NEURAL NETWORK VISUALIZATION

NEURAL NETWORK VISUALIZATION Neural Network Visualization 465 NEURAL NETWORK VISUALIZATION Jakub Wejchert Gerald Tesauro IB M Research T.J. Watson Research Center Yorktown Heights NY 10598 ABSTRACT We have developed graphics to visualize

More information

Stable Trajectory Design for Highly Constrained Environments using Receding Horizon Control

Stable Trajectory Design for Highly Constrained Environments using Receding Horizon Control Stable Trajectory Design for Highly Constrained Environments using Receding Horizon Control Yoshiaki Kuwata and Jonathan P. How Space Systems Laboratory Massachusetts Institute of Technology {kuwata,jhow}@mit.edu

More information

Waypoint Navigation with Position and Heading Control using Complex Vector Fields for an Ackermann Steering Autonomous Vehicle

Waypoint Navigation with Position and Heading Control using Complex Vector Fields for an Ackermann Steering Autonomous Vehicle Waypoint Navigation with Position and Heading Control using Complex Vector Fields for an Ackermann Steering Autonomous Vehicle Tommie J. Liddy and Tien-Fu Lu School of Mechanical Engineering; The University

More information

MIKE: a Multimodal Cinematographic Editor for Virtual Worlds

MIKE: a Multimodal Cinematographic Editor for Virtual Worlds MIKE: a Multimodal Cinematographic Editor for Virtual Worlds Bruno de Araújo, André Campos, Joaquim A. Jorge Department of Information Systems and Computer Science INESC-ID/IST/Technical University of

More information

A Dynamics-based Comparison Metric for Motion Graphs

A Dynamics-based Comparison Metric for Motion Graphs The Visual Computer manuscript No. (will be inserted by the editor) Mikiko Matsunaga, Victor B. Zordan University of California, Riverside A Dynamics-based Comparison Metric for Motion Graphs the date

More information