Architecture of an Animation System for Human Characters

T. Pejša* and I.S. Pandžić*
*University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
(tomislav.pejsa, igor.pandzic)@fer.hr

Abstract - Virtual human characters are found in a broad range of applications, from movies, games and networked virtual environments to teleconferencing and tutoring applications. Such applications are available on a variety of platforms, from desktop and web to mobile devices. High-quality animation is an essential prerequisite for realistic and believable virtual characters. Though researchers and application developers have ample animation techniques for virtual characters at their disposal, implementing these techniques in an existing application tends to be a daunting and time-consuming task. In this paper we present visage SDK, a versatile framework for real-time character animation based on the MPEG-4 FBA standard. It offers a wide spectrum of features, including animation playback, lip synchronization and facial motion tracking, while facilitating rapid production of art assets and easy integration with existing graphics engines.

I. INTRODUCTION

Virtual characters have long been a staple of the entertainment industry, namely motion pictures and electronic games, but in more recent times they have also found application in numerous other areas, such as education, communications, healthcare and business, where they appear in the roles of avatars, virtual tutors, assistants, companions, etc. A category of virtual characters that has been an exceptionally active topic of research is embodied conversational agents (ECAs): characters that interact with real humans in direct, face-to-face conversations. Virtual character applications are of great potential interest to the field of telecommunications.
Well-articulated human characters are a common feature in networked virtual environments such as Second Life, Google Lively and World of Warcraft, where they appear as user avatars and non-player characters (NPCs). A potential use of virtual characters is in video conferences, where digital avatars can replace video streams of human participants and thus conserve bandwidth. Until recently virtual characters were almost exclusive to desktop and browser-based network applications, but the growing processing power of mobile platforms now allows their use in mobile applications as well. These developments have resulted in increasing demand for high-quality visual simulation of virtual humans. This visual simulation consists of two aspects: the graphical model and animation. The latter encompasses body animation (locomotion, gestures) and facial animation (expressions, lip movements, facial gestures). While many open-source and proprietary rendering solutions deliver excellent graphical quality, their animation functionality, particularly facial animation, is often limited. Moreover, they often offer limited or no tools for the production of characters and animations, requiring the user to invest a great deal of effort into setting up a suitable art pipeline. Our system seeks to address this by delivering greater animation capabilities, while being general enough to work with any 3D engine, thus facilitating development of applications with cutting-edge visuals.
Our principal contributions are:
- the design of a character animation system architecture that supports advanced animation features and provides tools for producing new character and animation assets with minimal expenditure of time and effort
- a model for decoupling animation, asset production and rendering, enabling fast and easy integration of the system with different graphics engines and application frameworks

Facial motion tracking, lip synchronization and other advanced features make visage SDK especially suited for applications such as ECAs and low-bandwidth video communications. Due to the simplicity of art asset production, our system is ideal for researchers with limited resources at their disposal. We begin with a brief summary of related work and continue with an overview of our system's features, followed by a description of the underlying architecture. Finally, we discuss our future work and planned improvements to the system.

II. RELATED WORK

Though virtual characters have been a highly active area of research for years, little effort has been made to propose a system which would integrate the various aspects of their visual simulation and be easily usable in combination with different graphics engines and for a broad range of applications. The most recent and ambitious effort is SmartBody, a modular system for animation and behavior modeling of ECAs [1]. SmartBody sports more advanced low-level animation than visage SDK, featuring hierarchies of customizable, scheduled controllers. SmartBody also supports behavior modeling through Behavior Markup Language (BML) scripts [2]. However, SmartBody lacks some of visage SDK's integrated functionality, namely face tracking, lip sync and visual text-to-speech, and has no built-in capabilities for character model production. It also features a less common method of interfacing with

the renderer, namely via TCP, whereas visage SDK is statically or dynamically linked with the main engine. The new visage SDK system builds upon the earlier visage framework for facial animation [3], introducing new features such as body animation support and facial motion tracking. It also greatly enhances integration capabilities by enabling easy integration into other graphics engines. Engines for simulations and electronic games typically have modular and extensible architectures, and it is common for such engines to feature third-party components. Companies such as Havok and NaturalMotion even specialize in developing modular animation and physics systems intended to be integrated into existing architectures. These architectural concepts are commonly found in industry literature on graphics engine design, and we found such resources to be very useful references during the development of our system [13] [14] [15].

III. FEATURES

visage SDK includes the following core features:
- animation playback
- lip synchronization
- visual text-to-speech (VTTS) conversion
- facial motion tracking from video

In addition to these, visage SDK also includes functionality for automatic off-line production of character models and their preparation for real-time animation:
- face model generation from photographs
- morph target cloning

This functionality can be integrated into the user's own applications, and it is also available as full-featured stand-alone tools or plug-ins for 3D modeling software.

A. Animation playback

The visage SDK animation system is based on the MPEG-4 Face and Body Animation (FBA) standard [4] [5], which defines a set of animation parameters (FBAPs) needed for detailed and efficient animation of virtual humans.
These parameters can be divided into the following categories:
- body animation parameters (BAPs), which control individual degrees of freedom (DOFs) of the character's skeleton (e.g., r_shoulder_abduct)
- low-level facial animation parameters (FAPs), which control movements of individual facial features (e.g., open_jaw or raise_l_i_eyebrow; see Fig. 1)
- expression, a high-level FAP which controls the facial expression (e.g., joy or sadness)
- viseme, a high-level FAP which controls the shape of the lips during speech (e.g., TH or aa)

An animation in MPEG-4 FBA is simply a temporal sequence of FBAP value sets. Our system is capable of loading FBA animations from the MPEG-4 standard file format and applying them, frame by frame, to the character model. How each FBAP value is applied to the model depends on the graphics engine; visage SDK doesn't concern itself with the details of FBAP implementation.

Figure 1: MPEG-4 FBA face, marked with facial definition parameters (FDPs)
Figure 2: Face model imported from FaceGen and animated in visage SDK

B. Lip synchronization

visage SDK features a lip sync component for both on-line and off-line applications. The speech signal is analyzed and classified into visemes using neural networks (NNs). A genetic algorithm (GA) is used to automatically train the NNs [6] [8]. Our lip sync implementation is language-independent and has been successfully used with a number of languages, including English, Croatian, Swedish and Japanese [7].

C. Visual text-to-speech

visage SDK features a simple visual text-to-speech (VTTS) system based on Microsoft SAPI. It converts the SAPI output into a sequence of FBA visemes [9].

D. Facial motion tracking

The facial motion tracker tracks the facial movements of a real person from a recorded or live video stream. The motion tracking algorithm is based on active appearance models (AAMs) and doesn't require markers or special cameras; a simple, low-cost webcam is sufficient.
Tracked motion is encoded as a sequence of FAP values and applied to the virtual character in real time. In addition to this functionality, the facial motion tracker also supports automatic feature detection in static 2D images, which can be used to further automate the process of face model generation from photographs (see next section) [10]. Potential applications of the system include human-computer interaction and teleconferencing, where it can be

used to drive 3D avatars with the purpose of replacing video streams of human participants.

E. Face model generation from photos

The face model generator can be used to rapidly generate 3D face models. It takes a collection of orthogonal photographs of the head as input and uses them to deform a generic template face, producing a face model that matches the individual in the photographs [11]. Since the resulting models always have the same topology, the cloner can automatically generate morph targets for facial animation.

F. Facial motion cloning

The cloner copies morph targets from a source face model onto a target model [12]. For arbitrary models it requires the user to map a set of feature points (FDPs) to vertices of the model, though this step can be bypassed if the target and source models have identical topologies. The cloner also supports fully automated processing of face models generated by the Singular Inversions FaceGen application (Fig. 2).

IV. ARCHITECTURE

A. Components

visage SDK has a multi-layered architecture composed of the following key components:
- scene wrapper
- animation player
- high-level components: lip sync, TTS, face tracker
- character model production libraries: face model generator, facial motion cloner

The scene wrapper provides a common, renderer-independent interface to the character model in the scene. Its main task is to interpret animation parameter values and apply them to the model. Furthermore, it aggregates information about the character model pertinent to MPEG-4 FBA, most notably mappings of FBAPs to skeleton joint transformations and mesh morph targets. This high-level model data can be loaded from and serialized to an XML-based file format called VCM (Visage Character Model). Finally, the scene wrapper also provides direct access to the model's geometry (meshes and morph targets) and joint transformations, permitting model production components to work with any model irrespective of the underlying renderer.
The animation player is the core runtime component of the system, tasked with playing generalized FBA actions. These actions can be animations loaded from MPEG-4 .fba files, but also procedural actions such as gaze following. The animation player can play the actions in its own thread, or it can be updated manually in every frame.

High-level components include lip sync, text-to-speech and the facial motion tracker. They are implemented as FBA actions and are therefore driven by the animation player.

Figure 3: visage SDK architecture

Character model production components are meant to be used off-line, so they don't interface with the

animation player. They access the model's geometry via the common scene wrapper.

B. Integration with a graphics engine

When it comes to integration with graphics engines, visage SDK is highly flexible and places only bare-minimum requirements on the target engine. The engine should support the basic character animation techniques of skeletal animation and mesh morphing, and its API should provide the ability to manually set joint transformations and morph target blend weights. Animation is possible even if some of these requirements aren't met; for example, in the absence of morph target support, a facial bone rig can be used for facial animation. Minimal integration of the system is a trivial endeavor, amounting to subclassing and implementing a single wrapper class representing the character model. Depending on the desired functionality, certain parts of the wrapper can be left unimplemented; e.g., there is no need to provide geometry access if the developer doesn't plan to use the cloner or face model generation features in their application. The 3D model itself is loaded and handled by the engine, while FBAP mappings and other information pertaining to MPEG-4 FBA are loaded from VCM files. VCM files are tied to visage SDK rather than the graphics engine, which means they are portable and can be reused for a character model, or even different models with a similar structure, regardless of the underlying renderer. This greatly simplifies model production and reduces the interdependence of the art pipelines.

C. Component interactions

A simplified overview of runtime component interactions is illustrated in Fig. 3. The animation process flows in the following manner: the application adds actions to the animation player, for example lip sync coupled with gaze following and a set of simple repeating facial gestures (e.g., blinking). The animation player executes the animation loop.
From each action it obtains the current frame of animation as a set of FBAP values, blends all the sets together and applies them to the character model via the wrapper. The scene wrapper receives the FBAP value set and interprets the values according to the character's FBAP mappings. Typically, BAPs are converted to Euler angles and applied to bone transformations, while FAPs are interpreted as morph target blend weights. For the cloner and face model generator, interactions are even more straightforward, amounting to obtaining and updating the model's geometry via the model wrapper.

Figure 4: FBAPMapper, an OGRE-based application for mapping animation parameters

D. Art pipeline

As previously indicated, the art pipeline is very flexible. Characters are modeled in 3D modeling applications and exported to the target engine. Naturally, FBAPs need to be mapped to joints and morph targets of the model. This is done using a special plug-in for the 3D modeling application if one is available; otherwise it is handled by a stand-alone application with appropriate 3D format support. For animations the pipeline is similar, and again a plug-in is used for export and import. We also provide stand-alone face model and morph target production applications that use our production libraries. These applications rely on intermediate file formats (currently VRML or OGRE formats, though support for others will be added in the future) to obtain the model, while results are output via the intermediate format in combination with VCM. Fig. 4 shows a screenshot of a simple application for mapping and testing animation parameters.

V. EXAMPLES

We have so far successfully integrated our system with two open-source rendering engines, with more implementations on the way. The results are presented in this section.

A. OGRE

OGRE [16] is one of the most popular open-source, cross-platform rendering engines.
Its features include a powerful object-oriented interface, support for both the OpenGL and Direct3D graphics APIs, a shader-driven architecture, material scripts, hardware-accelerated skeletal animation with manual bone control, hardware-accelerated morph target animation, etc. Despite challenges encountered in implementing a wrapper around certain features, we have achieved both face and body animation in OGRE (Fig. 5 and 6).

OGRE is also notable for its extensive art pipeline, supported by exporters for nearly every modeling suite in existence. We initially encountered difficulties loading complex character models composed of multiple meshes, because basic OGRE doesn't support file formats capable of storing entire scenes. However, this shortcoming is rectified by the community-made DotScene loader plug-in, and a COLLADA loader is also under development by the OGRE community.

B. Irrlicht

Though Irrlicht [17] doesn't boast OGRE's power, it is nonetheless popular for its small size and ease of use. Its main shortcoming with regard to our system is the lack of support for morph target animation. However, we were able to alleviate this by creating a face model with a bone rig and parametrizing it over MPEG-4 FBAPs, with very promising results (see Fig. 8). Unlike OGRE's art pipeline, which is based on exporter plug-ins for 3D modeling applications, Irrlicht's art pipeline relies on a large number of loaders for various file formats. We found the loader for the Microsoft .x format to be the best suited to our needs and were able to successfully import several character models, with both body and face rigs (Fig. 7).

C. Upcoming implementations

We are concurrently working on integrating visage SDK with several other engines. These include:
- StudierStube (StbES) [19], a commercial augmented reality (AR) kit with a 3D renderer and support for character animation
- Horde3D [18], a lightweight, open-source renderer
- Panda3D, an open-source game engine known for its intuitive Python-based API

Of these we find StbES the most promising, as it will enable us to deliver the power of the visage SDK animation system to mobile platforms and combine it with StbES's extensive AR features.

VI. CONCLUSIONS AND FUTURE WORK

Our system supports a variety of character animation features and facilitates rapid application development and art asset production.
Its feature set makes it suitable for research and commercial applications, such as embodied agents and avatars in networked virtual environments and telecommunications, while the flexibility of its architecture means it can be used on a variety of platforms, including mobile devices. We have successfully integrated it with popular graphics engines and plan to provide more implementations in the near future, while simultaneously striving to make integration even easier. Furthermore, we are continually working on enhancing our system with new features. An upcoming major upgrade will introduce a new system for interactive motion control based on parametric motion graphs, and add character behavior modeling capabilities via BML. Our goal is to develop a universal and modular system for powerful yet intuitive modeling of character behavior, and to continue using it as the backbone of our research into high-level character control and applications involving virtual humans.

Figure 5: Lip sync in OGRE
Figure 6: Body animation in OGRE

We plan to release a substantial

portion of our system under an open-source license.

Figure 7: Body animation in Irrlicht
Figure 8: Facial animation in Irrlicht

ACKNOWLEDGMENT

This work was partly carried out within the research project "Embodied Conversational Agents as interface for networked and mobile services" supported by the Ministry of Science, Education and Sports of the Republic of Croatia. It was also partly supported by Visage Technologies. Integration of visage SDK with OGRE, Irrlicht and other engines was done by Mile Dogan, Danijel Pobi, Nikola Banko, Luka Šverko and Mario Medvedec, undergraduate students at the Faculty of Electrical Engineering and Computing in Zagreb, Croatia.

REFERENCES

[1] M. Thiebaux, A.N. Marshall, S. Marsella, M. Kallmann, "SmartBody: behavior realization for embodied conversational agents," in International Conference on Autonomous Agents, 2008, vol. 1.
[2] S. Kopp et al., "Towards a common framework for multimodal generation: The behavior markup language," in Intelligent Virtual Agents, 2006.
[3] I.S. Pandžić, J. Ahlberg, M. Wzorek, P. Rudol, M. Mošmondor, "Faces everywhere: towards ubiquitous production and delivery of face animation," in International Conference on Mobile and Ubiquitous Multimedia, 2003.
[4] I.S. Pandžić, R. Forchheimer, Eds., MPEG-4 Facial Animation - The Standard, Implementations and Applications, John Wiley & Sons.
[5] ISO/IEC MPEG-4 International Standard, Moving Picture Experts Group.
[6] G. Zorić, I.S. Pandžić, "Real-time language independent lip synchronization method using a genetic algorithm," in special issue of Signal Processing Journal on Multimodal Human-Computer Interfaces, vol. 86, issue 12.
[7] A. Čereković et al., "Towards an embodied conversational agent talking in Croatian," in International Conference on Telecommunications, 2007.
[8] G. Zorić, I.S. Pandžić, "A real-time lip sync system using a genetic algorithm for automatic neural network configuration," in IEEE International Conference on Multimedia & Expo, 2005, vol. 6.
[9] C. Pelachaud, "Visual Text-to-Speech," in MPEG-4 Facial Animation - The Standard, Implementations and Applications, I.S. Pandžić, R. Forchheimer, Eds., John Wiley & Sons.
[10] G. Fanelli, M. Fratarcangeli, "A non-invasive approach for driving virtual talking heads from real facial movements," in 3DTV Conference, 2007.
[11] M. Fratarcangeli, M. Andolfi, K. Stanković, I.S. Pandžić, "Animatable face models from uncalibrated input features," unpublished.
[12] I.S. Pandžić, "Facial Motion Cloning," Graphical Models Journal, vol. 65, issue 6.
[13] D. Eberly, 3D Game Engine Architecture, Morgan Kaufmann/Elsevier.
[14] S. Zerbst, O. Duvel, 3D Game Engine Programming, Course Technology PTR.
[15] Havok Physics Animation 6.00 User Guide, Havok.
[16] OGRE Manual v1.6, 2008.
[17] Nicolas Schulz, Horde3D Documentation, 2009.
[18] Nikolaus Gebhardt, Irrlicht Engine 1.5 API Documentation, 2008.
[19] Christian Doppler Laboratory, Graz University of Technology, "Handheld augmented reality," 2008.


New Media Production week 3

New Media Production week 3 New Media Production week 3 Multimedia ponpong@gmail.com What is Multimedia? Multimedia = Multi + Media Multi = Many, Multiple Media = Distribution tool & information presentation text, graphic, voice,

More information

Completing the Multimedia Architecture

Completing the Multimedia Architecture Copyright Khronos Group, 2011 - Page 1 Completing the Multimedia Architecture Erik Noreke Chair of OpenSL ES Working Group Chair of OpenMAX AL Working Group Copyright Khronos Group, 2011 - Page 2 Today

More information

Advanced Graphics and Animation

Advanced Graphics and Animation Advanced Graphics and Animation Character Marco Gillies and Dan Jones Goldsmiths Aims and objectives By the end of the lecture you will be able to describe How 3D characters are animated Skeletal animation

More information

Animation & AR Modeling Guide. version 3.0

Animation & AR Modeling Guide. version 3.0 Animation & AR Modeling Guide version 3.0 Contents 1. Introduction... 3 2. Face animation modeling guide...4 2.1. Creating blendshapes...4 2.1.1. Blendshape suggestions...5 2.2. Binding configuration...6

More information

Character animation Christian Miller CS Fall 2011

Character animation Christian Miller CS Fall 2011 Character animation Christian Miller CS 354 - Fall 2011 Exam 2 grades Avg = 74.4, std. dev. = 14.4, min = 42, max = 99 Characters Everything is important in an animation But people are especially sensitive

More information

3D Production Pipeline

3D Production Pipeline Overview 3D Production Pipeline Story Character Design Art Direction Storyboarding Vocal Tracks 3D Animatics Modeling Animation Rendering Effects Compositing Basics : OpenGL, transformation Modeling :

More information

Create Natural User Interfaces with the Intel RealSense SDK Beta 2014

Create Natural User Interfaces with the Intel RealSense SDK Beta 2014 Create Natural User Interfaces with the Intel RealSense SDK Beta 2014 The Intel RealSense SDK Free Tools and APIs for building natural user interfaces. Public Beta for Windows available Q3 2014 Accessible

More information

A Scripting Language for Multimodal Presentation on Mobile Phones

A Scripting Language for Multimodal Presentation on Mobile Phones A Scripting Language for Multimodal Presentation on Mobile Phones Santi Saeyor 1, Suman Mukherjee 2, Koki Uchiyama 2, Ishizuka Mitsuru 1 1 Dept. of Information and Communication Engineering, University

More information

lesson 24 Creating & Distributing New Media Content

lesson 24 Creating & Distributing New Media Content lesson 24 Creating & Distributing New Media Content This lesson includes the following sections: Creating New Media Content Technologies That Support New Media Distributing New Media Content Creating New

More information

MPML: A Multimodal Presentation Markup Language with Character Agent Control Functions

MPML: A Multimodal Presentation Markup Language with Character Agent Control Functions MPML: A Multimodal Presentation Markup Language with Character Agent Control Functions Takayuki Tsutsui, Santi Saeyor and Mitsuru Ishizuka Dept. of Information and Communication Eng., School of Engineering,

More information

Animation of 3D surfaces

Animation of 3D surfaces Animation of 3D surfaces 2013-14 Motivations When character animation is controlled by skeleton set of hierarchical joints joints oriented by rotations the character shape still needs to be visible: visible

More information

A Taxonomy of Web Agents

A Taxonomy of Web Agents A Taxonomy of s Zhisheng Huang, Anton Eliëns, Alex van Ballegooij, and Paul de Bra Free University of Amsterdam, The Netherlands Center for Mathematics and Computer Science(CWI), The Netherlands Eindhoven

More information

CG: Computer Graphics

CG: Computer Graphics CG: Computer Graphics CG 111 Survey of Computer Graphics 1 credit; 1 lecture hour Students are exposed to a broad array of software environments and concepts that they may encounter in real-world collaborative

More information

Animation Tools THETOPPERSWAY.COM

Animation Tools THETOPPERSWAY.COM Animation Tools 1.) 3D Max: It includes 3D modeling and rendering software. A new Graphite modeling and texturing system(the Graphite Modeling Tools set, also called the modeling ribbon, gives you everything

More information

Towards Audiovisual TTS

Towards Audiovisual TTS Towards Audiovisual TTS in Estonian Einar MEISTER a, SaschaFAGEL b and RainerMETSVAHI a a Institute of Cybernetics at Tallinn University of Technology, Estonia b zoobemessageentertainmentgmbh, Berlin,

More information

An Analysis of Image Retrieval Behavior for Metadata Type and Google Image Database

An Analysis of Image Retrieval Behavior for Metadata Type and Google Image Database An Analysis of Image Retrieval Behavior for Metadata Type and Google Image Database Toru Fukumoto Canon Inc., JAPAN fukumoto.toru@canon.co.jp Abstract: A large number of digital images are stored on the

More information

Using the rear projection of the Socibot Desktop robot for creation of applications with facial expressions

Using the rear projection of the Socibot Desktop robot for creation of applications with facial expressions IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Using the rear projection of the Socibot Desktop robot for creation of applications with facial expressions To cite this article:

More information

Wimba Classroom VPAT

Wimba Classroom VPAT Wimba Classroom VPAT The purpose of this Voluntary Product Accessibility Template, or VPAT, is to assist Federal contracting officials and other buyers in making preliminary assessments regarding the availability

More information

To Do. Advanced Computer Graphics. The Story So Far. Course Outline. Rendering (Creating, shading images from geometry, lighting, materials)

To Do. Advanced Computer Graphics. The Story So Far. Course Outline. Rendering (Creating, shading images from geometry, lighting, materials) Advanced Computer Graphics CSE 190 [Spring 2015], Lecture 16 Ravi Ramamoorthi http://www.cs.ucsd.edu/~ravir To Do Assignment 3 milestone due May 29 Should already be well on way Contact us for difficulties

More information

Course Outline. Advanced Computer Graphics. Animation. The Story So Far. Animation. To Do

Course Outline. Advanced Computer Graphics. Animation. The Story So Far. Animation. To Do Advanced Computer Graphics CSE 163 [Spring 2017], Lecture 18 Ravi Ramamoorthi http://www.cs.ucsd.edu/~ravir 3D Graphics Pipeline Modeling (Creating 3D Geometry) Course Outline Rendering (Creating, shading

More information

CSE452 Computer Graphics

CSE452 Computer Graphics CSE452 Computer Graphics Lecture 19: From Morphing To Animation Capturing and Animating Skin Deformation in Human Motion, Park and Hodgins, SIGGRAPH 2006 CSE452 Lecture 19: From Morphing to Animation 1

More information

AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY TOKYO F ^ k.^

AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY TOKYO F ^ k.^ Computer a jap Animation Algorithms and Techniques Second Edition Rick Parent Ohio State University AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY TOKYO

More information

Tracking facial features using low resolution and low fps cameras under variable light conditions

Tracking facial features using low resolution and low fps cameras under variable light conditions Tracking facial features using low resolution and low fps cameras under variable light conditions Peter Kubíni * Department of Computer Graphics Comenius University Bratislava / Slovakia Abstract We are

More information

A Novel Realizer of Conversational Behavior for Affective and Personalized Human Machine Interaction - EVA U-Realizer -

A Novel Realizer of Conversational Behavior for Affective and Personalized Human Machine Interaction - EVA U-Realizer - A Novel Realizer of Conversational Behavior for Affective and Personalized Human Machine Interaction - EVA U-Realizer - IZIDOR MLAKAR 1, ZDRAVKO KAČIČ 1, MATEJ BORKO 2, MATEJ ROJC 1 1 Faculty of Electrical

More information

Pipeline and Modeling Guidelines

Pipeline and Modeling Guidelines Li kewhatyou see? Buyt hebookat t hefocalbookst or e Char act ermodel i ng wi t h Mayaand ZBr ush Jason Pat node ISBN 9780240520346 CH01-K52034.indd viii 12/4/07 1:52:00 PM CHAPTER 1 Pipeline and Modeling

More information

Adobe Authorware 7 as Programming Tool

Adobe Authorware 7 as Programming Tool Adobe Authorware 7 as Programming Tool Vedran Kosovac and Bozidar Kovacic Faculty of Arts and Sciences University of Rijeka Omladinska 14, Rijeka, 51000, Croatia Phone: (385) 51345046 Fax: (385) 51 345207

More information

Narrative Editing of Web Contexts on Online Community System with Avatar-like Agents

Narrative Editing of Web Contexts on Online Community System with Avatar-like Agents Narrative Editing of Web Contexts on Online Community System with Avatar-like Agents Toru Takahashi, & Hideaki Takeda*, Graduate School of Information Science, Nara Institute of Science and Technology

More information

Date: June 27, 2016 Name of Product: Cisco Unified Customer Voice Portal (CVP) v11.5 Contact for more information:

Date: June 27, 2016 Name of Product: Cisco Unified Customer Voice Portal (CVP) v11.5 Contact for more information: Date: June 27, 2016 Name of Product: Cisco Unified Customer Voice Portal (CVP) v11.5 Contact for more information: accessibility@cisco.com The following testing was done on a Windows 7 with Freedom Scientific

More information

Animations. Hakan Bilen University of Edinburgh. Computer Graphics Fall Some slides are courtesy of Steve Marschner and Kavita Bala

Animations. Hakan Bilen University of Edinburgh. Computer Graphics Fall Some slides are courtesy of Steve Marschner and Kavita Bala Animations Hakan Bilen University of Edinburgh Computer Graphics Fall 2017 Some slides are courtesy of Steve Marschner and Kavita Bala Animation Artistic process What are animators trying to do? What tools

More information

Animation Essentially a question of flipping between many still images, fast enough

Animation Essentially a question of flipping between many still images, fast enough 33(70) Information Coding / Computer Graphics, ISY, LiTH Animation Essentially a question of flipping between many still images, fast enough 33(70) Animation as a topic Page flipping, double-buffering

More information

3D on the Web Why We Need Declarative 3D Arguments for an W3C Incubator Group

3D on the Web Why We Need Declarative 3D Arguments for an W3C Incubator Group 3D on the Web Why We Need Declarative 3D Arguments for an W3C Incubator Group Philipp Slusallek Johannes Behr Kristian Sons German Research Center for Artificial Intelligence (DFKI) Intel Visual Computing

More information

MPEG-4 AUTHORING TOOL FOR THE COMPOSITION OF 3D AUDIOVISUAL SCENES

MPEG-4 AUTHORING TOOL FOR THE COMPOSITION OF 3D AUDIOVISUAL SCENES MPEG-4 AUTHORING TOOL FOR THE COMPOSITION OF 3D AUDIOVISUAL SCENES P. Daras I. Kompatsiaris T. Raptis M. G. Strintzis Informatics and Telematics Institute 1,Kyvernidou str. 546 39 Thessaloniki, GREECE

More information

YuJa Enterprise Video Platform Voluntary Product Accessibility Template (VPAT)

YuJa Enterprise Video Platform Voluntary Product Accessibility Template (VPAT) Platform Accessibility YuJa Enterprise Video Platform Voluntary Product Accessibility Template (VPAT) Updated: April 18, 2018 Introduction YuJa Corporation strives to create an equal and consistent media

More information

SMK SEKSYEN 5,WANGSAMAJU KUALA LUMPUR FORM

SMK SEKSYEN 5,WANGSAMAJU KUALA LUMPUR FORM SMK SEKSYEN 5,WANGSAMAJU 53300 KUALA LUMPUR FORM 5 LEARNING AREA 4 MULTIMEDIA Ramadan, SMK Pekan 2007 MULTIMEDIA LESSON 21 MULTIMEDIA CONCEPTS DEFINITION OF MULTIMEDIA Multimedia has been used in many

More information

AUGMENTED REALITY BASED SHOPPING EXPERIENCE

AUGMENTED REALITY BASED SHOPPING EXPERIENCE AUGMENTED REALITY BASED SHOPPING EXPERIENCE Rohan W 1, R R Raghavan 2 1. Student, Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Chennai, Tamil Nadu,

More information

VPAT. Voluntary Product Accessibility Template. Version 1.3. Supporting Features. Not Applicable. Supported with Exceptions. Supported with Exceptions

VPAT. Voluntary Product Accessibility Template. Version 1.3. Supporting Features. Not Applicable. Supported with Exceptions. Supported with Exceptions VPAT Voluntary Product Accessibility Template Version 1.3 Date: 01 August 2014 Name of Product: kuracloud Contact for more Information: John Enlow (Chief Technical Officer) Summary Table Section 1194.21

More information

Computer Animation. Algorithms and Techniques. z< MORGAN KAUFMANN PUBLISHERS. Rick Parent Ohio State University AN IMPRINT OF ELSEVIER SCIENCE

Computer Animation. Algorithms and Techniques. z< MORGAN KAUFMANN PUBLISHERS. Rick Parent Ohio State University AN IMPRINT OF ELSEVIER SCIENCE Computer Animation Algorithms and Techniques Rick Parent Ohio State University z< MORGAN KAUFMANN PUBLISHERS AN IMPRINT OF ELSEVIER SCIENCE AMSTERDAM BOSTON LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO

More information

Animation COM3404. Richard Everson. School of Engineering, Computer Science and Mathematics University of Exeter

Animation COM3404. Richard Everson. School of Engineering, Computer Science and Mathematics University of Exeter Animation COM3404 Richard Everson School of Engineering, Computer Science and Mathematics University of Exeter R.M.Everson@exeter.ac.uk http://www.secamlocal.ex.ac.uk/studyres/com304 Richard Everson Animation

More information

Adding Advanced Shader Features and Handling Fragmentation

Adding Advanced Shader Features and Handling Fragmentation Copyright Khronos Group, 2010 - Page 1 Adding Advanced Shader Features and Handling Fragmentation How to enable your application on a wide range of devices Imagination Technologies Copyright Khronos Group,

More information

Tips on DVD Authoring and DVD Duplication M A X E L L P R O F E S S I O N A L M E D I A

Tips on DVD Authoring and DVD Duplication M A X E L L P R O F E S S I O N A L M E D I A Tips on DVD Authoring and DVD Duplication DVD Authoring - Introduction The postproduction business has certainly come a long way in the past decade or so. This includes the duplication/authoring aspect

More information

Ecma TC43: Universal 3D

Ecma TC43: Universal 3D Ecma/TC43/2004/18 Ecma/GA/2004/68 Ecma TC43: Universal 3D Ecma GA - June 29, 2004 Sanjay Deshmukh, Intel TC43 Chair Sanjay Deshmukh, Intel Corp. Ecma GA June 29, 2004 1 Agenda Problem Statement Why Universal

More information

LATIHAN Identify the use of multimedia in various fields.

LATIHAN Identify the use of multimedia in various fields. LATIHAN 4.1 1. Define multimedia. Multimedia is the presentation of information by using a combination of text, audio, graphic, video and animation. Multimedia has played an important role in other fields,

More information

Animation by Adaptation Tutorial 1: Animation Basics

Animation by Adaptation Tutorial 1: Animation Basics Animation by Adaptation Tutorial 1: Animation Basics Michael Gleicher Graphics Group Department of Computer Sciences University of Wisconsin Madison http://www.cs.wisc.edu/graphics Outline Talk #1: Basics

More information

Voluntary Product Accessibility Template (VPAT) Applicable Sections

Voluntary Product Accessibility Template (VPAT) Applicable Sections Voluntary Product Accessibility Template (VPAT) Name of Product Reaxys Date January 6, 2014 Completed by Jack Bellis, Elsevier UCD, Philadelphia Contact for more Information Product Version Number 2.15859.10

More information

New Features. Importing Resources

New Features. Importing Resources CyberLink StreamAuthor 4 is a powerful tool for creating compelling media-rich presentations using video, audio, PowerPoint slides, and other supplementary documents. It allows users to capture live videos

More information

DTask & LiteBody: Open Source, Standards-based Tools for Building Web-deployed Embodied Conversational Agents

DTask & LiteBody: Open Source, Standards-based Tools for Building Web-deployed Embodied Conversational Agents DTask & LiteBody: Open Source, Standards-based Tools for Building Web-deployed Embodied Conversational Agents Timothy Bickmore, Daniel Schulman, and George Shaw Northeastern University College of Computer

More information

Intel Authoring Tools for UPnP* Technologies

Intel Authoring Tools for UPnP* Technologies Intel Authoring Tools for UPnP* Technologies (Version 1.00, 05-07-2003) INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE,

More information

DIABLO VALLEY COLLEGE CATALOG

DIABLO VALLEY COLLEGE CATALOG ART DIGITAL MEDIA ARTDM Toni Fannin, Dean Applied and Fine Arts Division Business and Foreign Language Building, Room 204 Possible career opportunities Digital media or graphic design jobs cover all ends

More information

PERSONALIZED FACE ANIMATION IN SHOWFACE SYSTEM. Ali Arya Babak Hamidzadeh

PERSONALIZED FACE ANIMATION IN SHOWFACE SYSTEM. Ali Arya Babak Hamidzadeh PERSONALIZED FACE ANIMATION IN SHOWFACE SYSTEM Ali Arya Babak Hamidzadeh Dept. of Electrical & Computer Engineering, University of British Columbia, 2356 Main Mall, Vancouver, BC, Canada V6T 1Z4, Phone:

More information

Three-Dimensional Computer Animation

Three-Dimensional Computer Animation Three-Dimensional Computer Animation Visual Imaging in the Electronic Age Donald P. Greenberg November 29, 2016 Lecture #27 Why do we need an animation production pipeline? Animated full-length features

More information

Using Classical Mechanism Concepts to Motivate Modern Mechanism Analysis and Synthesis Methods

Using Classical Mechanism Concepts to Motivate Modern Mechanism Analysis and Synthesis Methods Using Classical Mechanism Concepts to Motivate Modern Mechanism Analysis and Synthesis Methods Robert LeMaster, Ph.D. 1 Abstract This paper describes a methodology by which fundamental concepts in the

More information

The ExtReAM Library: Extensible Real-time Animations for Multiple Platforms

The ExtReAM Library: Extensible Real-time Animations for Multiple Platforms 1 The ExtReAM Library: Extensible Real-time Animations for Multiple Platforms Pieter Jorissen, Jeroen Dierckx and Wim Lamotte Interdisciplinary institute for BroadBand Technology (IBBT) Expertise Centre

More information

Rendering Grass with Instancing in DirectX* 10

Rendering Grass with Instancing in DirectX* 10 Rendering Grass with Instancing in DirectX* 10 By Anu Kalra Because of the geometric complexity, rendering realistic grass in real-time is difficult, especially on consumer graphics hardware. This article

More information

Chapter 8 Visualization and Optimization

Chapter 8 Visualization and Optimization Chapter 8 Visualization and Optimization Recommended reference books: [1] Edited by R. S. Gallagher: Computer Visualization, Graphics Techniques for Scientific and Engineering Analysis by CRC, 1994 [2]

More information

Cross-platform platform.

Cross-platform platform. Cross-platform platform www.libretro.com RetroArch A cross-platform architecture The reference frontend to an API An app library/ecosystem of its own A no-strings-attached enduser program A project with

More information

Lesson 5: Multimedia on the Web

Lesson 5: Multimedia on the Web Lesson 5: Multimedia on the Web Learning Targets I can: Define objects and their relationships to multimedia Explain the fundamentals of C, C++, Java, JavaScript, JScript, C#, ActiveX and VBScript Discuss

More information

Tata Elxsi benchmark report: Unreal Datasmith

Tata Elxsi benchmark report: Unreal Datasmith This report and its findings were produced by Tata Elxsi. The report was sponsored by Unity Technologies. Tata Elxsi benchmark report: comparing PiXYZ Studio and Unreal Datasmith A Tata Elxsi perspective

More information

Visualization of Manufacturing Composite Lay-up Technology by Augmented Reality Application

Visualization of Manufacturing Composite Lay-up Technology by Augmented Reality Application Visualization of Manufacturing Composite Lay-up Technology by Augmented Reality Application JOZEF NOVAK-MARCINCIN, JOZEF BARNA, LUDMILA NOVAKOVA-MARCINCINOVA, VERONIKA FECOVA Faculty of Manufacturing Technologies

More information

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z Glossary A B C D E F G H I J K L M N O P Q R S T U V W X Y Z A App See Application Application An application (sometimes known as an app ) is a computer program which allows the user to perform a specific

More information

Hardware Displacement Mapping

Hardware Displacement Mapping Matrox's revolutionary new surface generation technology, (HDM), equates a giant leap in the pursuit of 3D realism. Matrox is the first to develop a hardware implementation of displacement mapping and

More information

High Level Graphics Programming & VR System Architecture

High Level Graphics Programming & VR System Architecture High Level Graphics Programming & VR System Architecture Hannes Interactive Media Systems Group (IMS) Institute of Software Technology and Interactive Systems Based on material by Dieter Schmalstieg VR

More information

Voluntary Product Accessibility Template (VPAT)

Voluntary Product Accessibility Template (VPAT) Voluntary Product Accessibility Template (VPAT) Date 2017-02-06 Name of Product Top Hat Lecture - Student - Android App Version Contact Steve Pascoe steve.pascoe+vpat@tophat.com Summary Table Criteria

More information

WebGL Meetup GDC Copyright Khronos Group, Page 1

WebGL Meetup GDC Copyright Khronos Group, Page 1 WebGL Meetup GDC 2012 Copyright Khronos Group, 2012 - Page 1 Copyright Khronos Group, 2012 - Page 2 Khronos API Ecosystem Trends Neil Trevett Vice President Mobile Content, NVIDIA President, The Khronos

More information

Animation. Representation of objects as they vary over time. Traditionally, based on individual drawing or photographing the frames in a sequence

Animation. Representation of objects as they vary over time. Traditionally, based on individual drawing or photographing the frames in a sequence 6 Animation Animation Representation of objects as they vary over time Traditionally, based on individual drawing or photographing the frames in a sequence Computer animation also results in a sequence

More information

Copyright Khronos Group Page 1. Vulkan Overview. June 2015

Copyright Khronos Group Page 1. Vulkan Overview. June 2015 Copyright Khronos Group 2015 - Page 1 Vulkan Overview June 2015 Copyright Khronos Group 2015 - Page 2 Khronos Connects Software to Silicon Open Consortium creating OPEN STANDARD APIs for hardware acceleration

More information

TRIBHUVAN UNIVERSITY Institute of Engineering Pulchowk Campus Department of Electronics and Computer Engineering

TRIBHUVAN UNIVERSITY Institute of Engineering Pulchowk Campus Department of Electronics and Computer Engineering TRIBHUVAN UNIVERSITY Institute of Engineering Pulchowk Campus Department of Electronics and Computer Engineering A Final project Report ON Minor Project Java Media Player Submitted By Bisharjan Pokharel(061bct512)

More information

Facial Animation System Based on Image Warping Algorithm

Facial Animation System Based on Image Warping Algorithm Facial Animation System Based on Image Warping Algorithm Lanfang Dong 1, Yatao Wang 2, Kui Ni 3, Kuikui Lu 4 Vision Computing and Visualization Laboratory, School of Computer Science and Technology, University

More information

Streaming Media. Advanced Audio. Erik Noreke Standardization Consultant Chair, OpenSL ES. Copyright Khronos Group, Page 1

Streaming Media. Advanced Audio. Erik Noreke Standardization Consultant Chair, OpenSL ES. Copyright Khronos Group, Page 1 Streaming Media Advanced Audio Erik Noreke Standardization Consultant Chair, OpenSL ES Copyright Khronos Group, 2010 - Page 1 Today s Consumer Requirements Rich media applications and UI - Consumer decisions

More information