International Journal of Automation and Computing 04(1), January 2007, 25-29 DOI: 10.1007/s11633-007-0025-4

Generation and Control of Game Virtual Environment

Myeong Won Lee 1, Jae Moon Lee 2
1 Department of Internet Information Engineering, The University of Suwon, Gyeonggi-do 445-743, Korea
2 Department of Computers, The University of Suwon, Gyeonggi-do 445-743, Korea

Abstract: In this paper, we present a framework for the generation and control of an Internet-based 3-dimensional game virtual environment that allows a character to navigate through the environment. Our framework includes 3-dimensional terrain mesh data processing, a map editor, scene processing, collision processing, and walkthrough control. We also define an environment-specific semantic information editor, which can be applied using specific locations obtained from the real world. Users can insert text information related to the character's real position in the real world during navigation in the game virtual environment.

Keywords: Virtual environment, virtual reality, 3D game, 3D navigation, 3D scene management.

1 Introduction

A series of processes is needed to develop an Internet-based game using 3D virtual environments. The client application program, which provides the end-user interface, includes game logic and engine modules amongst its processes [1]. The game logic module contains the game progress procedures and is concerned not with how objects such as characters, buildings, and terrain are displayed, but with what objects are displayed. In other words, it controls overall program progress and is independent of the computer operating system and application programming interface (API). In order to provide this independence, a game engine module is required as an interface to the operating system and the API [2,3]. The game engine module is in charge of a variety of tasks necessary for the behavior and representation of objects that appear in the game logic application [4,5].
The game engine module is composed of several submodules that provide a rendering library [6,7], terrain management, object and resource management, an application framework, and the networking functions necessary for a game program. Our research has focused on the game engine module, and on providing a method of controlling game progress in real time. We present an overall framework for generating and controlling the 3D virtual environments commonly needed when developing game applications. We also introduce a method of controlling objects through a console command interface and describe the commands available for controlling them. In addition, we describe a method for generating environment semantic information [8,9].

(Manuscript received August 3, 2006; revised December 15, 2006. Corresponding author e-mail: mwlee@suwon.ac.kr)

2 Mesh data management

Normally, we use a 3D modeler when generating a character or a virtual environment during game software development. We can import 3D geometric data from the 3D modeler, since it often provides an export function that outputs textual mesh information. We used Autodesk 3ds Max to obtain the geometric data for the characters and virtual environments used in our game. It provides an ASCII scene exporter (ASE) which can write such mesh information to text files (see Fig. 1). Although the data are easy to handle and modify, ASE has the disadvantage of including unnecessary information that is not used directly by the application. For example, for a 3D model of a virtual environment representing an ancient palace, ASE produced 12 megabytes of mesh data. Such a large mesh file may cause problems when implementing the game: loading the file and transforming it for the graphics libraries that use it may take a long time.
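One common remedy for this load cost is to cache the parsed mesh in a compact binary file that can be read back without any text parsing. A minimal sketch of the idea follows; the file layout and names here are illustrative only, not the authors' actual format:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// A vertex as it might appear after parsing a textual (ASE-style) export.
struct Vertex { float x, y, z; };

// Write the parsed mesh to a compact binary cache: a record count
// followed by raw vertex records. Loading this file later skips
// text parsing entirely.
bool saveBinaryMesh(const char* path, const std::vector<Vertex>& verts) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    uint32_t n = static_cast<uint32_t>(verts.size());
    std::fwrite(&n, sizeof n, 1, f);
    std::fwrite(verts.data(), sizeof(Vertex), n, f);
    std::fclose(f);
    return true;
}

// Read the cache back: one fread for the count, one for all vertices.
bool loadBinaryMesh(const char* path, std::vector<Vertex>& verts) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    uint32_t n = 0;
    if (std::fread(&n, sizeof n, 1, f) != 1) { std::fclose(f); return false; }
    verts.resize(n);
    size_t got = std::fread(verts.data(), sizeof(Vertex), n, f);
    std::fclose(f);
    return got == n;
}
```

A real format would also carry faces, normals, and texture coordinates, but the principle is the same: pay the parsing cost once at export time, not at every load.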
Therefore, we have devised a binary format named MSH, whose files are smaller than ASE files and which also provides a considerable performance improvement in mesh transformation (see Fig. 1). The parsing step usually takes a significant amount of time when loading mesh information, and it can be avoided entirely with MSH files. Our mesh management module provides a function for transforming an ASE file into an MSH file so that client applications can use it.

Fig. 1 Mesh data transformation

3 Scene management

Generally, there are several different kinds of scenes in a game, such as the logo, main menu, intro, and game-progress scenes. Since these scenes are independent of each other, each needs its own management logic, and each needs processing when it is started and closed. A scene management system handles the life cycle of each scene from start to finish (see Fig. 2). The system provides two classes, SceneManager and Scene. The Scene class is subclassed, and five overridable functions - Initialize, Dispose, Begin, End, and DoFrame - are defined and registered in the SceneManager. The SceneManager calls these functions on each scene according to its life cycle. It identifies registered scenes by integer numbers and uses these numbers when activating or deleting scenes. Each scene is in one of two states: registered but inactive, or active. Several scenes can be registered, but only one scene can be active in the SceneManager at a time. The active scene becomes inactive if we activate another scene while it is running.

Fig. 2 Life cycle of a scene

4 3D terrain mesh generation

We consider buildings, natural features of the earth, and terrain [10] as the components of a virtual environment. We generate the buildings and natural features with an external graphics tool and manage them with the mesh management module. The terrain, however, is not generated with such a tool. Instead, it is rendered by a method called real-time optimally adapting meshes (ROAM) [7]. ROAM is an algorithm for implementing level of detail (LOD) when representing terrain, and it can control the number of polygons dynamically. ROAM tessellates each mesh into triangles every frame. A mesh can crack if neighboring areas within it are tessellated to different levels. ROAM addresses this problem by also tessellating a neighboring triangle whenever a triangle is tessellated.

5 Map editor

In game development, after constructing virtual environments including characters and backgrounds with commercial graphics tools, it is difficult to generate the continuous series of variable scenes that provide the game content. A map editor is required for characters to move according to the game scenario and for processing the collisions that may occur during navigation in the virtual environment [11]. The editor makes it easy for a map designer to author virtual environments. The map designer can author two kinds of space. One is indoor space, which can be generated by traditional graphics tools and then read into the game program; we refer to this runtime as the game engine. The indoor space is stored in the game engine with optimized data structures, and static objects can be placed in it using the tool. In addition, binary space partitioning (BSP) and potentially visible sets (PVS) are included in the game engine to render indoor space more efficiently. The other kind of space is outdoor space. It can be generated by setting heights and textures for terrain tiles, by random terrain generation, or from raw bitmap files. Static objects can be placed in outdoor space just as in indoor space using the map editor. The map editor uses ROAM to represent and manage large outdoor spaces.

6 Collision processing

A collision processing method must be provided for a game to resolve the problems that occur when objects collide during navigation. Since detecting collisions polygon by polygon is inefficient, we used an octree representation: regions of space that contain no part of the target object are exempted from collision detection. Then an axis-aligned bounding box (AABB) - the bounding box of a mesh, aligned with the XYZ axes of the virtual environment's coordinate system - is applied [12]. Collision detection is performed not on each polygon but on the bounding box surrounding an object. In addition, after a bounding-box collision is detected, we apply a polygon-to-bounding-box test. We used a separating axis when detecting collisions between a polygon and a bounding box. A separating axis is an axis that separates the two primitives: their projections onto it do not overlap, as shown in Fig. 3. For example, the projections of two primitives onto such an axis do not overlap if the primitives are apart from each other. Therefore, the two primitives have not collided if such an axis exists, and have collided if no such axis exists. In our system, a character is not allowed to proceed further in the virtual environment once a collision has been detected.
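For two axis-aligned boxes, the separating-axis criterion reduces to interval-overlap checks on each coordinate axis. A minimal sketch (names are illustrative):

```cpp
// Axis-aligned bounding box given by its min and max corners.
struct AABB { float min[3]; float max[3]; };

// Two AABBs are disjoint iff some coordinate axis separates them,
// i.e. their projected intervals on that axis do not overlap.
bool aabbOverlap(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i) {
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
            return false;  // found a separating axis: no collision
    }
    return true;  // no separating axis exists: the boxes intersect
}
```

The polygon-versus-box test works on the same principle but must also try axes derived from the triangle's normal and edge directions, as in the triangle-box overlap test of [12].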
7 Environment semantic information editor

Generally, many games treat virtual environments as independent of the characters and focus on modeling and rendering. They do not consider information within the environment, or how the environment relates to the characters. We, however, have defined semantic information tied to real positions in the virtual environment (see Fig. 4). Characters are able to recognize the semantic information at the locations they pass through.
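One simple way to key such information to position is a lookup table from quantized (x, z) cells to text, so that a character's position resolves to the annotation authored for that area. The structure below is a hypothetical sketch, not the authors' database schema:

```cpp
#include <cmath>
#include <map>
#include <string>
#include <utility>

// Semantic text keyed by terrain cell; a character's (x, z) position is
// quantized to a cell so nearby positions share the same annotation.
class SemanticMap {
public:
    explicit SemanticMap(float cellSize) : cell_(cellSize) {}

    void insert(float x, float z, const std::string& text) {
        cells_[key(x, z)] = text;
    }

    // Returns the annotation for the cell the character stands in,
    // or an empty string if none was authored there.
    std::string lookup(float x, float z) const {
        auto it = cells_.find(key(x, z));
        return it == cells_.end() ? std::string() : it->second;
    }

private:
    // Quantize a continuous position to integer cell coordinates.
    std::pair<long, long> key(float x, float z) const {
        return { static_cast<long>(std::floor(x / cell_)),
                 static_cast<long>(std::floor(z / cell_)) };
    }

    float cell_;
    std::map<std::pair<long, long>, std::string> cells_;
};
```

In a client-server setting such as the one described below, the client would fill a structure like this from the server's database response and query it as the character moves.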
Fig. 3 Separated axis

Fig. 4 Environment semantic information

Fig. 5 Environment semantic information editor

Fig. 6 Classes for our game virtual environment framework

The virtual environments in our research can contain semantic information specific to an object's geographical location. In order to synchronize geographical location and semantic information, a function is needed for entering semantic information at the locations a character walks through. We therefore added an editor for inputting location-specific information wherever an explanatory description is required. In the editor, the virtual environment is displayed as a 2-dimensional scene in which objects are represented by their x and z coordinates, and semantic information can be inserted using a dialog box. In Fig. 4, the game client system requests environment semantic information from the server; the server retrieves the information from the database, and the editor stores what the user has chosen to insert. Fig. 5 shows the environment semantic information editor. We included the editor because an information editor becomes even more of a necessity when we consider mobile game programs with information services in a ubiquitous environment.

8 Game control interface

We used the facade design pattern for the points where our game system executes a job or modifies an attribute of the virtual environment; the facade pattern here is the same concept as described for object-oriented languages. An interface is required to access the facade class so that control functions are available during game execution. To accomplish this, we developed a console library and provided a character user interface (CUI) through which users access the game system. In other words, the console is a user interface that can invoke various control functions of the game system during execution.
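A console of this kind typically tokenizes a command line and dispatches on the first word to a registered handler. A minimal sketch follows; the API here is illustrative, not the actual console library:

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// A tiny console: command words map to handlers that receive the
// remaining tokens (e.g. "set key delay 200" -> handler for "set").
class Console {
public:
    using Handler = std::function<void(const std::vector<std::string>&)>;

    void registerCommand(const std::string& name, Handler h) {
        handlers_[name] = std::move(h);
    }

    // Tokenize the line on whitespace and dispatch on the first word.
    // Returns false for an empty line or an unknown command.
    bool execute(const std::string& line) {
        std::istringstream in(line);
        std::vector<std::string> tokens;
        for (std::string t; in >> t; ) tokens.push_back(t);
        if (tokens.empty()) return false;
        auto it = handlers_.find(tokens[0]);
        if (it == handlers_.end()) return false;
        it->second(std::vector<std::string>(tokens.begin() + 1, tokens.end()));
        return true;
    }

private:
    std::map<std::string, Handler> handlers_;
};
```

An exec-style batch command can then be layered on top by reading a file line by line and calling execute on each line, which is how collective setup as described below becomes possible.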
From a functional point of view, the advantage is that users can invoke these functions themselves while a game is executing; from a systems point of view, functions can be executed collectively. Various setup commands can be batched in separate files, so setup can be modified flexibly. We have defined and implemented several console commands, shown in Table 1. The setup functions available in the console window include the delay time for keystroke input, a flag for showing the console, selection of the polygon-fill rendering mode, gravity assignment, and semantic information retrieval. Read and write functions are also provided to retrieve and store game progress during play, along with several other functions.

9 Implementation results

We implemented the overall system using the Visual C++ programming language, Microsoft DirectX 9, and 3ds
Max. Fig. 6 briefly describes the classes composing our game system.

Table 1 Console commands

  Command  Parameter            Description
  -------  -------------------  ------------------------------------------------------
  set      key delay            Set the delay time for keystroke input.
  set      console show         Set the flag for the console interface.
  set      rs fill              Specify the rendering-mode fill option.
  set      s gravity            Specify the amount of gravity.
  set      info                 Read the map information.
  set      play cam record      Apply the view according to stored camera information.
  set      record cam           Record the current camera information.
  set      play cam speed       Specify the speed of camera movement.
  write    cam                  Store camera information.
  read     cam                  Read the stored camera information.
  exec     file path            Execute the commands in the batch file at the given path.
  exit     none or cam records  Close the console or delete the camera information.

The system starts from the Ruin Application program, which manages the loop of the game's procedures. Ruin System manages all the functions that provide the game to the user. Ruin PhysicsSystem manages the interaction between characters and their surrounding environments, including the collision detection and response algorithms. Ruin Frame is a class for generating and managing windows using the Win32 API. Ruin Renderer manages the Direct3D functions and devices for rendering 3D characters and environments. Ruin Console manages the console interface for controlling game setup (key delay, console show, set gravity, info, record camera, and so on, as described in Section 8). Ruin InputMap enables input of text information and provides the functions of the environment semantic information editor. Ruin SceneManager manages the various scenes according to the user's selections and the game progress. Ruin Scene is an abstract class defining each scene; it is subclassed to implement the contents of a game application. Fig. 7 shows sample screenshots from the system.

Fig. 7 Game samples based on our game VR framework
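The scene life cycle driven by the scene manager and the abstract scene class can be sketched as follows; this is a simplified illustration, not the actual Ruin class interfaces:

```cpp
#include <map>

// Each scene overrides the five life-cycle hooks; the manager tracks
// registered scenes by integer id and keeps at most one of them active.
class Scene {
public:
    virtual ~Scene() {}
    virtual void Initialize() {}  // called once when registered
    virtual void Begin() {}       // called when the scene becomes active
    virtual void DoFrame() {}     // called every frame while active
    virtual void End() {}         // called when another scene takes over
    virtual void Dispose() {}     // called once when removed
};

class SceneManager {
public:
    void registerScene(int id, Scene* s) {
        scenes_[id] = s;
        s->Initialize();
    }

    // Activating a scene first ends the currently active one, if any.
    void activate(int id) {
        if (active_) active_->End();
        active_ = scenes_[id];
        active_->Begin();
    }

    void doFrame() { if (active_) active_->DoFrame(); }

private:
    std::map<int, Scene*> scenes_;
    Scene* active_ = nullptr;
};
```

The game loop then reduces to calling doFrame each iteration, with scene transitions (logo to menu to game) expressed as activate calls.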
10 Conclusions

In this paper, we presented a framework for generating and controlling 3D virtual environments in the development of an Internet game. The framework includes the generation of virtual environments, control of character movement, a real-time virtual environment information generator, and a console interface for controlling game progress. In the framework organization, we defined a systematic game programming procedure, including the environment semantic information editor. We focused on the 3D characters and environments that affect the visual representation of a game. Compared with conventional game engines, our system has the following advantages. First, the framework provides a total solution for generating and controlling 3D objects in a game. Second, we have solved the collision problems between characters and environments during game progress by combining and improving conventional collision detection algorithms. Third, the framework can control game progress through the convenient console command interface that we have developed; game progress can be stored partially or totally and retrieved whenever it is needed for replay, and several convenient setup functions are provided. Lastly, our framework includes the environment semantic information editor, which can insert valuable information at specific locations, enhancing the usability of the virtual environment. Future work will include a client system for mobile environments, so that such environment semantic information can be collected in a ubiquitous virtual environment.

References

[1] C. Faisstnauer, W. Purgathofer, M. Gervautz, J. D. Gascuel. Construction of an Open Geometry Server for Client-server Virtual Environments. In Proceedings of IEEE Conference on Virtual Reality 2001, Yokohama, Japan, pp. 105-114, 2001.
[2] L. Bishop, D. Eberly, T. Whitted, M. Finch, M. Shantz. Designing a PC Game Engine. IEEE Computer Graphics and Applications, vol. 18, no. 1, pp. 46-53, 1998.
[3] R. Darken, P. McDowell, E. Johnson. The Delta3D Open Source Game Engine. IEEE Computer Graphics and Applications, vol. 25, no. 3, pp. 10-12, 2005.
[4] K. Kanev, S. Kimura. Integrating Dynamic Full-body Motion Devices in Interactive 3D Entertainment. IEEE Computer Graphics and Applications, vol. 22, no. 4, pp. 76-86, 2002.
[5] T. K. Capin, H. Noser, D. Thalmann, I. S. Pandzic, N. M. Thalmann. Virtual Human Representation and Communication in VLNET. IEEE Computer Graphics and Applications, vol. 17, no. 2, pp. 42-53, 1997.
[6] F. D. Luna. Introduction to 3D Game Programming with DirectX 9.0, Wordware Publishing, USA, 2003.
[7] T. Akenine-Moller, E. Haines.
Real-time Rendering, 2nd ed., A K Peters, USA, 2002.
[8] M. E. Latoschik, P. Biermann, I. Wachsmuth. High-level Semantics Representation for Intelligent Simulative Environments. In Proceedings of IEEE Conference on Virtual Reality 2005, Bonn, Germany, pp. 283-284, 2005.
[9] D. A. Bowman, C. North, J. Chen, N. F. Polys, P. S. Pyla, U. Yilmaz. Information-rich Virtual Environments: Theory, Tools, and Research Agenda. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology 2003, Osaka, Japan, pp. 81-90, 2003.
[10] G. Snook. Real-Time 3D Terrain Engines Using C++ and DirectX 9, Charles River Media, USA, 2003.
[11] R. P. Darken, J. L. Sibert. A Toolset for Navigation in Virtual Environments. In Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology, Georgia, USA, pp. 157-165, 1993.
[12] T. Akenine-Moller. Fast 3D Triangle-Box Overlap Testing. Journal of Graphics Tools, vol. 6, no. 1, pp. 29-33, 2001.

Myeong Won Lee received her B. Sc. from Seoul National University, Korea, in 1981, the M. Sc. degree from Seoul National University in 1984, and the Ph. D. degree from the University of Tokyo, Japan, in 1990. She is currently an associate professor at the Department of Internet Information Engineering, the University of Suwon. Her research interests include computer graphics applications, Web-based virtual reality, computer animation, and multimedia communication.

Jae Moon Lee received his B. Sc. from the University of Suwon, Korea, in 2005, and was a researcher in the VR&M Laboratory at the University of Suwon. He is currently a researcher at ESTsoft Corp. His research interests include computer graphics applications and Web-based virtual reality.