(UR-15) Interaction Models for Infrastructure Management Systems Using Virtual and Augmented Realities

Elaheh Mozaffari 1, Bechir Khabir 1, Cheng Zhang 2, Prasanna Devarakonda 2, Prasad Bauchkar 2 and Amin Hammad 3

1. Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
2. Department of Building, Civil & Environmental Engineering, Concordia University, Montreal, QC, Canada
3. Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, Canada

1. Introduction

Future infrastructure lifecycle management systems have several requirements, such as integrating lifecycle data and providing access to databases, supporting 4D visualization and interaction, representing virtual spaces and different scales of space and time, adapting construction-related standards, and providing support for spatio-temporal analysis. Using these systems in mobile situations will allow on-site infrastructure field workers, such as construction superintendents and bridge inspectors, to use mobile and wearable computers to interact with geo-referenced spatial models of the infrastructure and to automatically retrieve the necessary information in real time, based on their location, orientation and specific task context, using Virtual Reality (VR) or Augmented Reality (AR) techniques.

AR allows interaction with 3D virtual objects and other types of information (e.g., text) superimposed over 3D real objects in real time. The augmentation can be realized by looking at the real world through a see-through head-mounted display equipped with sensors that accurately track head movements, so that the virtual objects are registered with the real objects in real time. In these systems, field workers will be able to access and update information related to their tasks in the field with minimum effort spent on interacting with the system, which increases their efficiency and potentially their safety.

VR and AR environments can be considered as two cases of the concept of Mixed Reality (MR) introduced by Milgram and Kishino (1994), where different combinations of the virtual and real components are possible along a virtuality continuum (Figure 1). In order to realize the virtual components, information about the objects' shapes and locations is organized in a database using a geospatial model, so that the 3D real scene can be augmented with data extracted from the database. 3D models of urban and infrastructure environments can be used as VR models or added as augmentation to the real environment in order to analyze different scenarios, such as urban planning, emergency response simulation, or virtual reconstruction of sites.

Figure 1: Virtuality continuum, spanning from the real environment through Augmented Reality (AR) and Augmented Virtuality (AV) to the virtual environment (Milgram and Kishino, 1994)

This paper discusses several issues related to interaction models for infrastructure management systems using virtual and augmented realities. The paper focuses on new VR and AR interaction models that have been developed especially to suit the requirements of mobile infrastructure management systems. These models are discussed within a larger framework called Location-Based Computing for Infrastructure field tasks (LBC-Infra).

LBC-Infra facilitates collecting inspection data by allowing field workers to interact with geo-referenced infrastructure models and automatically retrieve the necessary information in real time based on their location and orientation, and the task context (Hammad et al., 1999; Hammad et al., 2004).

2. 4D Models and Interaction Components of LBC-Infra

LBC-Infra integrates 4D models, tracking technologies, mobile computing and distributed wireless communication in one framework. The integration of space and time in 4D models results in the following advantages: (1) visualizing different types of data; (2) providing a user-friendly interface that can reduce data input errors; (3) facilitating data sharing; and (4) improving the efficiency of database management. 4D visualization can be understood more quickly and completely than with traditional management tools (Fischer, 2001). In large-scale infrastructure projects, Geographical Information Systems (GIS) are needed for generating information that relates to locations. Spatial interactions cannot be fully understood if they are not linked to geographical locations as perceived in the real world (Zlatanova et al., 2002). Therefore, the 4D models should be located on a 3D map.

LBC-Infra has the following main user interface components (Barrilleaux, 2001): visualization and feedback, control, access, navigation, manipulation, and collaboration. The interaction patterns described below for each component are typical examples identified based on common tasks that field workers usually perform and the type of information they collect.

2.1 Visualization and feedback

(1) Displaying graphical details: LBC-Infra displays to the field worker structural details retrieved from previous inspection reports. This can happen in a proactive way based on spatial events. For example, once a cracked element is within the inspector's field of view, the system displays the cracks on that element discovered during previous inspections. This helps focus the inspector's attention on specific locations. The user of the system can control the Levels of Details (LoDs) used to represent objects depending on his or her needs.

(2) Displaying non-graphical information and instructions: The user interface can provide links to documents related to the project, such as reports, regulations and specifications. In addition, LBC-Infra allows for displaying context-sensitive instructions on the steps involved in a specific task, such as instructions about the method of checking new cracks and measuring crack size and crack propagation.

2.2 Control

LBC-Infra interprets the user input differently depending on the selected feature and the context. For example, clicking the pointing device can result in selecting a menu option or in picking an object from the 3D virtual world, depending on where the user clicked.

2.3 Access

Accessing data in LBC-Infra can be achieved through real-time location tracking. The user just needs to walk towards an object of interest and then click on it, or stand in front of it for a short period of time. The system tracks his or her location and updates the viewpoint accordingly. The relevant data are then retrieved from the database and displayed.
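A minimal sketch of this proximity- and dwell-based access pattern is given below. The types (Pose, TrackedObject) and the threshold values are illustrative assumptions, not part of the LBC-Infra prototype:

```java
import java.util.List;

/** Minimal sketch of dwell-based access: an object is treated as "accessed" when the
 *  tracked user stays within a trigger radius for a minimum dwell time.
 *  Pose and TrackedObject are hypothetical placeholders, not LBC-Infra classes. */
public class DwellSelector {

    public static class Pose { double x, y, z; }
    public interface TrackedObject { Pose location(); String id(); }

    private static final double TRIGGER_RADIUS_M = 2.0;   // assumed proximity threshold
    private static final long   DWELL_TIME_MS    = 1500;  // assumed dwell duration

    private TrackedObject candidate;
    private long candidateSince;

    /** Called whenever the tracking devices report a new user position. */
    public TrackedObject update(Pose user, List<TrackedObject> nearbyObjects, long nowMs) {
        TrackedObject nearest = null;
        double best = TRIGGER_RADIUS_M;
        for (TrackedObject o : nearbyObjects) {
            double d = distance(user, o.location());
            if (d < best) { best = d; nearest = o; }
        }
        if (nearest == null || nearest != candidate) {   // user moved away or switched target
            candidate = nearest;
            candidateSince = nowMs;
            return null;
        }
        // Same candidate for long enough: treat it as an access request.
        return (nowMs - candidateSince >= DWELL_TIME_MS) ? candidate : null;
    }

    private static double distance(Pose a, Pose b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```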

2.4 Navigation

As an extension of conventional navigation systems based on 2D maps, LBC-Infra can also present navigation information in 3D. Within a specific field task, the system can guide a field worker by providing him or her with navigation information and focusing his or her attention on the next element to be inspected.

2.5 Manipulation

Once an object is found, the inspector can add inspected defects by directly editing their shapes in 3D and manipulating them using a pointing device.

2.6 Collaboration

LBC-Infra facilitates wireless communication among a team of field workers who are geographically separated on the project site by establishing a common spatial reference for the site. In some cases, the field workers may collaborate with an expert engineer stationed at the office who monitors the same scene generated by the mobile unit in the field.

Figure 2: UML collaboration diagram between the entities of the framework. The mobile client contains Controller, View and Tracker objects; the server contains a Controller and the Model with its task, attribute and spatial databases. The numbered messages are: 1: start application; 2: remote data access; 3, 12: update view; 4: select next task; 5: confirm next task; 6: present task information; 7: present previous information; 8: perform task; 9: capture input events; 10: update data; 11: distributed notification. Continuous activities include reading tracking data, updating the current coordinates, updating the viewpoint, navigation and selection menus, and capturing spatial events.

3. Interaction Model

An interaction model identifies a collection of high-level reusable software objects and their relationships. The generic structure of this model embodies the general functionalities of LBC-Infra so that it can be extended and customized to create more specific applications. Figure 2 shows the relationships between the high-level objects of LBC-Infra using a Unified Modelling Language (UML) collaboration diagram, where objects interact with each other by sending messages. The numbers in the diagram refer to the order of execution of the messages. Messages that are not numbered are either threads that start at the beginning of the application and run continuously and concurrently with other messages, or event-driven messages that may occur at any time.
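A skeletal sketch of how the high-level objects in Figure 2 could be wired together is shown below; the class and method names are illustrative stand-ins, not the actual LBC-Infra code:

```java
/** Skeletal sketch of the mobile-client objects in Figure 2 (illustrative names only).
 *  The Tracker runs as a background thread; the Controller reacts to tracking and
 *  input events and keeps the View and the Model consistent. */
public class InteractionModelSketch {

    interface Model {                        // wraps the task, attribute and spatial databases
        String nextTask();                   // 4: select next task (via remote data access)
        String taskInformation(String task);
        void   update(String element, String data);    // 10: update data
    }

    interface View {
        void updateViewpoint(double[] pose);           // 3, 12: update view
        void presentTaskInformation(String info);      // 6, 7: present task / previous info
    }

    static class Tracker extends Thread {    // reads the tracking devices continuously
        private final Controller controller;
        Tracker(Controller c) { controller = c; }
        @Override public void run() {
            while (!isInterrupted()) {
                double[] pose = readTrackingDevices();         // location and orientation
                controller.onPoseUpdate(pose);                 // update current coordinates
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        }
        private double[] readTrackingDevices() { return new double[6]; }   // stub
    }

    static class Controller {
        private final Model model; private final View view;
        Controller(Model m, View v) { model = m; view = v; }

        void start() {                                 // 1: start application
            new Tracker(this).start();
            String task = model.nextTask();            // 4, 5: select and confirm next task
            view.presentTaskInformation(model.taskInformation(task));
        }
        void onPoseUpdate(double[] pose) { view.updateViewpoint(pose); }
        void onInputEvent(String pickedElement, String data) {   // 9: capture input events
            model.update(pickedElement, data);                   // 10: update data
        }
    }
}
```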

4. Computational Aspects of the Framework

4.1 Real-time Navigation Guidance

In LBC-Infra, navigation guidance is provided through animated 3D arrows showing the user the path to the object of interest. The system gets new locations from the tracking devices in real time and changes the view platform accordingly (Figure 3). It then adds an arrow that moves between the location of the user (e.g., P1) and the location of the object of interest (P0). The orientation of the arrow follows a vector that goes from the location of the user to P0. If the user cannot take a straight path to the object for any reason, such as the presence of obstacles, he or she can take a different route without losing the object, because the arrow is automatically updated to go from the new location to the object.

Figure 3: Animated arrow pointing to the object of interest P0 as the user moves through successive locations P1, P2 and P3

Figure 4: Example of picking the model for marking a defect: (a) 3D sketch and (b) side view, showing the viewpoint, the picking device position on the screen, the pick ray into the 3D model, the picked point P and the surface normal N

4.2 Picking Behavior

Interaction with the 3D model is mainly facilitated by picking the elements of the model. In order to interactively retrieve or update information related to the picked element, it is important to know the location and orientation of that element in the 3D environment. A pick shape is selected as the picking tool, e.g., a ray, segment, cone, or cylinder. The pick shape extends from the viewer's eye location, through the picking device location, and into the virtual world. When a pick is requested, the pickable shapes that intersect the pick shape are computed. The pick returns a list of objects, from which the nearest object has to be found. After the closest object (O) is found, the surface (F) that faces the user is identified in order to display suitable feedback. Through the calculation of the distances between the picking device position and the intersection points, the nearest intersection point (P) can be found, as well as the geometry of the face (F) that contains P. The normal vector (N) of surface (F) can be calculated from the current coordinates and is used to represent the orientation of that face.
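The two steps just described, selecting the nearest intersection point along the pick shape and computing the normal of the intersected face, can be sketched as follows. It is assumed that the picking system returns candidate intersections together with the vertices of the (triangulated) face they lie on; Vec3 and Intersection are minimal placeholder types rather than Java 3D classes:

```java
/** Geometric sketch: nearest intersection point along a pick ray and the normal of the
 *  face that contains it. Vec3 and Intersection are placeholders, not Java 3D classes. */
public class PickingSketch {

    public static class Vec3 {
        public final double x, y, z;
        public Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        Vec3 minus(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
        Vec3 cross(Vec3 o) { return new Vec3(y*o.z - z*o.y, z*o.x - x*o.z, x*o.y - y*o.x); }
        double length()    { return Math.sqrt(x*x + y*y + z*z); }
        Vec3 normalized()  { double l = length(); return new Vec3(x/l, y/l, z/l); }
    }

    /** One intersection reported by the picking system: the point P and the three
     *  vertices of the intersected (triangulated) face F. */
    public static class Intersection {
        public final Vec3 point, v0, v1, v2;
        public Intersection(Vec3 p, Vec3 v0, Vec3 v1, Vec3 v2) {
            this.point = p; this.v0 = v0; this.v1 = v1; this.v2 = v2;
        }
    }

    /** Find the intersection point P closest to the picking device position. */
    public static Intersection nearest(Vec3 pickDevicePosition, java.util.List<Intersection> hits) {
        Intersection best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Intersection hit : hits) {
            double d = hit.point.minus(pickDevicePosition).length();
            if (d < bestDistance) { bestDistance = d; best = hit; }
        }
        return best;
    }

    /** Normal N of the face containing P, computed from the face vertices. */
    public static Vec3 faceNormal(Intersection hit) {
        Vec3 e1 = hit.v1.minus(hit.v0);
        Vec3 e2 = hit.v2.minus(hit.v0);
        return e1.cross(e2).normalized();
    }
}
```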

Based on P and N, the shape representing the feedback can be created and inserted in the scene graph at point P, with an offset distance from the surface (F) proportional to the size of the shape. The offset point P' can be found using the following equation:

P' = P + offset · N    [1]

The following example of visual feedback based on picking illustrates the method of calculation (Figure 4). In the case of inspection, the system allows the user to directly add a defect, represented by a 3D shape, on the surface of the inspected element. The location of the defect is given by the picked point (P). To show the defect on the surface, the center point of the 3D shape of that defect is moved in the direction of the normal vector of that surface (N) by a small offset distance based on the size of the shape.

4.3 Levels of Details (LoDs)

The basic idea of LoDs is to use simpler versions of an object to meet different precision needs and to improve image rendering performance. When the viewer is far from the object, a simplified model can be used to speed up the rendering. Because of the distance, the simplified version looks approximately the same as the more detailed version.

The LoDs algorithm consists of three major parts: generation, selection, and switching. Generation produces different representations of a model at different levels of detail. Selection chooses a representation based on distance ranges. Switching changes from one representation to another. When the user moves, this event is detected and the distance between the user and the object is calculated. Based on this distance, the corresponding switch is selected and the representation assigned to that range is rendered.

As shown in Figure 5, the LoDs are defined for the whole campus model and include five cases, depending on the distance d (in km) between the viewpoint and the center of the campus: LoD 1 (d > 5), nothing shown; LoD 2 (5 ≥ d > 4), blocks; LoD 3 (4 ≥ d > 3), blocks texture-mapped with simple patterns; LoD 4 (3 ≥ d > 2), blocks texture-mapped with low-quality images; and LoD 5 (2 ≥ d > 1), blocks texture-mapped with high-quality images plus objects around the buildings, such as traffic lights, fire hydrants and street furniture. In this figure, the visible range is 5 km.

Figure 5: Relationship between distance and LoDs for the campus
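A minimal sketch of the selection and switching steps, using the distance thresholds from Figure 5, is given below; the class and method names are illustrative, not the prototype's code:

```java
/** Minimal sketch of LoD selection for the campus model: the distance between the
 *  viewpoint and the campus center is mapped to one of the five representations of
 *  Figure 5. Names are illustrative, not the actual prototype code. */
public class CampusLodSelector {

    public enum Lod { NOTHING, BLOCKS, SIMPLE_PATTERNS, LOW_QUALITY_IMAGES, HIGH_QUALITY_IMAGES }

    // Distance thresholds in kilometres, as in Figure 5.
    private static final double[] THRESHOLDS = {5.0, 4.0, 3.0, 2.0};

    /** Selection: choose the representation for the current viewpoint distance (km). */
    public static Lod select(double distanceKm) {
        if (distanceKm > THRESHOLDS[0]) return Lod.NOTHING;              // d > 5
        if (distanceKm > THRESHOLDS[1]) return Lod.BLOCKS;               // 5 >= d > 4
        if (distanceKm > THRESHOLDS[2]) return Lod.SIMPLE_PATTERNS;      // 4 >= d > 3
        if (distanceKm > THRESHOLDS[3]) return Lod.LOW_QUALITY_IMAGES;   // 3 >= d > 2
        return Lod.HIGH_QUALITY_IMAGES;                                  // d <= 2
    }

    /** Switching: called when the tracked viewpoint moves; switch only when the LoD changes. */
    public static Lod onViewpointMoved(double distanceKm, Lod current) {
        Lod selected = select(distanceKm);
        if (selected != current) {
            // switch the displayed representation here (e.g., swap the active branch in the scene graph)
        }
        return selected;
    }
}
```

In a Java 3D implementation, the same threshold logic can also be expressed with the built-in DistanceLOD behavior attached to a Switch node.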

5. Prototype Development

To demonstrate the feasibility and usefulness of the proposed approach, a prototype Facilities Management Information System (FMIS) was developed. This prototype is part of a more comprehensive system for infrastructure management (Hu and Hammad, 2005; Zhang and Hammad, 2005). In order to allow information sharing over the Internet, the Java programming language is used, and Java 3D is used to implement the 3D graphics of the system (Walsh and Gehringer, 2001). The system integrates a 3D model with an object-relational database, GIS and tracking components to develop an FMIS that can be used on site for retrieving and updating the information of a given element by directly interacting with the 3D model. The system is designed to provide a user-friendly interface that can be used in mobile situations, with access control to the FMIS databases, including inspection and maintenance records. Users can query the database through the GUI or by picking a specific element, and get the results as visual feedback in the 3D model. Users can easily navigate in the 3D space using the navigation tools provided in the system.

Two tracking technologies were investigated: RTK-GPS for exterior applications, using a Trimble 5700 receiver, and video tracking using ARToolKit for interior applications (jARToolKit, 2004). In the case of video tracking, when the user launches the tracking option of the system, it checks for the available video camera and loads its parameters. The system then starts searching for markers that are visible through the camera. Once a marker is detected, the user location in the virtual world is updated, and information about the object associated with the marker is loaded and displayed in the 3D scene as augmentation. Two types of web cameras were satisfactorily tested with ARToolKit: the Logitech QuickCam and the IO-Data USB CCD Camera.

Two types of Head-Mounted Displays (HMDs) were tested: the Microvision Nomad ND2000 (Microvision, 2005) and the MicroOptical SV-6 (MicroOptical, 2005). The Nomad HMD has a rugged, monochrome red display (32 gray levels) that is readable in all lighting conditions, with automatic brightness adjustment. The MicroOptical SV-6 is smaller and less rugged, and has a color display. Both displays support a resolution of 800x600 pixels. After testing these displays under different conditions, it was found that the MicroOptical SV-6 is more suitable for LBC-Infra because of the overall superior visibility of its color display.
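The video-tracking flow described above (detect a marker, update the user location, retrieve and display the associated information) can be sketched schematically as follows; MarkerDetector, AttributeDatabase and Scene are generic placeholders and not the ARToolKit or jARToolKit API:

```java
/** Schematic sketch of the marker-based video-tracking loop. MarkerDetector,
 *  AttributeDatabase and Scene are generic placeholders, not the ARToolKit/jARToolKit API. */
public class VideoTrackingLoopSketch {

    public interface MarkerDetector {
        /** Returns the id and 4x4 pose of a detected marker, or null if none is visible. */
        DetectedMarker detect(byte[] cameraFrame);
    }
    public static class DetectedMarker { public int id; public double[] pose = new double[16]; }

    public interface AttributeDatabase { String objectInfoForMarker(int markerId); }

    public interface Scene {
        void updateUserLocation(double[] markerPose);     // register the viewpoint from the marker pose
        void showAugmentation(String objectInfo);         // overlay the retrieved information
    }

    private final MarkerDetector detector;
    private final AttributeDatabase database;
    private final Scene scene;

    public VideoTrackingLoopSketch(MarkerDetector d, AttributeDatabase db, Scene s) {
        detector = d; database = db; scene = s;
    }

    /** Called for every frame grabbed from the web camera. */
    public void onCameraFrame(byte[] frame) {
        DetectedMarker marker = detector.detect(frame);
        if (marker == null) return;                       // no marker visible in this frame
        scene.updateUserLocation(marker.pose);            // update the location in the virtual world
        String info = database.objectInfoForMarker(marker.id);
        if (info != null) scene.showAugmentation(info);   // display the associated information
    }
}
```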
6. Case Study 1: Non-Immersive Virtual Reality

The Concordia University downtown campus is used in the case studies to validate the proposed approach. The 3D virtual model of the university campus was developed using the following data: (1) 2D CAD drawings of the buildings obtained from the FM department of the university; (2) a digital map of the city of Montreal obtained from the municipality of Montreal; (3) a Digital Elevation Model (DEM) of the city obtained from the USGS website; (4) a small VRML object library developed by the authors for objects to be embedded in the 3D model, such as traffic lights, fire hydrants and street furniture; and (5) orthogonal digital images of the facades of the buildings collected using a digital camera. The digital map and the DEM data of Montreal were used to generate 2D and 3D maps using the Modified Transverse Mercator (MTM) projection. The Eurostep Toolbox (Eurostep Group, 2004) was used for accessing IFC files created with ArchiCAD and Timberline software.

As a basic example of a user interface for collecting data, an inspection routine for building envelopes was linked to the 3D model (Figure 6). The 3D model can also link to all necessary specifications, drawings, procedures, and inspection and maintenance records, so that users can access the information needed for a specific task from the model through a customized user interface. At this stage, the 3D model contains only the structural elements of the buildings. Future work will consider adding other information related to FM, such as mechanical/electrical equipment, emergency evacuation plans, etc. This can help users easily understand the entire facility configuration and access the related information.

The GIS layers, images and 3D objects described above were combined and translated to the Virtual Reality Modeling Language (VRML). The translator application, developed in Visual Basic, uses a GIS library for extruding the GIS shapefiles and creating a number of VRML files that constitute the virtual 3D model loaded into the Java 3D scene graph of the prototype FMIS.
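The extrusion step can be illustrated with a simplified sketch that turns a 2D building footprint and a height into vertical wall faces; this only illustrates the idea under simple assumptions and is not the Visual Basic translator or its GIS library:

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified illustration of footprint extrusion: each edge of a 2D building footprint
 *  becomes a vertical wall quad of the given height. This sketches the idea behind the
 *  shapefile-extrusion step, not the actual translator code. */
public class FootprintExtrusionSketch {

    public static class Point2D {
        public final double x, y;
        public Point2D(double x, double y) { this.x = x; this.y = y; }
    }

    /** A wall quad given by four 3D corner points (x, y, z). */
    public static class Quad {
        public final double[][] corners;
        public Quad(double[][] c) { corners = c; }
    }

    public static List<Quad> extrude(List<Point2D> footprint, double height) {
        List<Quad> walls = new ArrayList<>();
        int n = footprint.size();
        for (int i = 0; i < n; i++) {
            Point2D a = footprint.get(i);
            Point2D b = footprint.get((i + 1) % n);          // next vertex, closing the ring
            walls.add(new Quad(new double[][] {
                {a.x, a.y, 0},      {b.x, b.y, 0},            // bottom edge of the wall
                {b.x, b.y, height}, {a.x, a.y, height}        // top edge of the wall
            }));
        }
        return walls;   // a roof polygon at z = height could be added similarly
    }
}
```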

It should be noted that the entire downtown campus area should be modeled in order to have a complete model of the existing building stock; however, at this stage, only the buildings of the university (22 buildings) are included in the model.

Figure 6: Screenshot of the user interface of the developed prototype, showing the 3D browser, the tree structure of the facilities and the 2D GIS location view

Figure 7: Example of an AR application in building construction: (a) picture of the floor and the column with the attached marker; (b) graphical augmentation showing the workspace and the column ID; (c) augmentation of the column as seen by the superintendent; (d) construction superintendent equipped with the AR devices (video camera and HMD)

7. Case Study 2: Augmented Reality

The model developed above is used as a test bed for indoor and outdoor mobile AR using video tracking and GPS tracking, respectively. Because of space limitations, only the indoor case is discussed in this paper. Figure 7 shows an example of the results of the AR application used in a building construction project on the campus. Figure 7(a) shows a column on the fifth floor with a marker attached to it; the marker has an edge length of 20 cm. Figure 7(b) shows the graphical augmentation with the virtual workspace associated with the column. Figure 7(c) simulates the view that the user sees when the real structure of the column is augmented with the virtual workspace and the ID of the column. Figure 7(d) shows a construction superintendent equipped with the AR devices, checking the workspace by viewing it through the HMD.
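Registering a virtual object such as the column workspace against a physical marker amounts to composing the tracked marker pose with a fixed marker-to-object offset; the sketch below illustrates this with plain 4x4 matrices and illustrative names, not the prototype's code:

```java
/** Sketch of registering a virtual object (e.g., the column workspace) relative to a
 *  detected marker: the workspace pose in camera coordinates is the marker pose composed
 *  with a fixed marker-to-workspace offset. Plain 4x4 row-major matrices; illustrative only. */
public class MarkerRegistrationSketch {

    /** Multiply two 4x4 row-major matrices: result = a * b. */
    public static double[] multiply(double[] a, double[] b) {
        double[] r = new double[16];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                for (int k = 0; k < 4; k++)
                    r[row * 4 + col] += a[row * 4 + k] * b[k * 4 + col];
        return r;
    }

    /** Pose of the virtual workspace in camera coordinates, given the marker pose from the
     *  video tracker and the workspace pose expressed in marker coordinates. */
    public static double[] workspaceInCamera(double[] markerPoseInCamera,
                                             double[] workspaceInMarker) {
        return multiply(markerPoseInCamera, workspaceInMarker);
    }
}
```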

At this stage, the prototype system successfully integrates the databases of the 3D model and the inspection tasks, and allows the user to interact with the 3D model to identify the elements of the structure or to retrieve and update the related attributes in the database. User position and orientation are tracked and used to update the 3D view. Furthermore, the tracking information and all interactions with the system can be logged to record the activities of the inspector in the field. This will eventually allow the system to automatically generate a multimedia report of inspection activities and to replay the report in a way similar to replaying a digital video.

In the partial testing of the prototype system using the above hardware and software, several problems were identified; for example, the field of view is too narrow and reading the HMD outdoors is rather difficult. Further investigation is needed to identify and test the optimal hardware architecture for the prototype system.

8. Conclusions

In this paper, we discussed some aspects of the VR and AR interaction models of LBC-Infra. LBC-Infra facilitates accessing and collecting inspection data by allowing field workers to interact with geo-referenced infrastructure models and automatically retrieve the necessary information in real time based on their location and orientation, and the task context. The framework of LBC-Infra integrates mobile and wearable computers, 3D GIS/CAD databases, positioning technologies and wireless communications to provide field workers with the specific information they need in a proactive manner. The prototype, using video-based or GPS tracking, demonstrated the basic functionalities of LBC-Infra, such as tracking, navigation, and interacting with the 3D model to identify elements of the structure or to retrieve and update the related attributes in the database. This is expected to improve the efficiency and safety of field workers by allowing them to concentrate on their job. Future work will focus on further development and testing of the prototype system, considering both hardware and software issues, and on investigating the use of LBC-Infra as a basis for a collaborative environment for field workers.

REFERENCES

Barrilleaux, J. (2001). 3D User Interfaces with Java 3D, Manning Publications, pp. 41-87.

Eurostep Group (2004). <http://www.eurostep.com/> (accessed December 2004).

Fischer, M. (2001). Introduction to 4D Research, <http://www.stanford.edu/group/4d>.

Hammad, A., Sugihara, K. and Hayashi, Y. (1999). Using 3D-GIS and the Internet for Facilitating Public Involvement in Urban Landscape Evaluation, Journal of Civil Engineering Information Processing System, JSCE, Vol. 8, pp. 215-222.

Hammad, A., Zhang, C., Hu, Y. and Mozaffari, E. (2004). Mobile Model-Based Bridge Lifecycle Management Systems, Proc. Conference on Construction Applications of Virtual Reality, ADETTI/ISCTE, Lisbon, pp. 109-120.

Hu, Y. and Hammad, A. (2005). Location-Based Mobile Bridge Inspection Support System, 1st CSCE Conference on Infrastructure Technologies, Management and Policy, Toronto.

jARToolKit - Java Binding for ARToolKit (2004). <http://www.c-lab.de/jartoolkit>.

MicroOptical web site (2005). <http://www.microopticalcorp.com/>.

Milgram, P. and Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays, IEICE Transactions on Information Systems, Vol. E77-D.

Nomad Display Systems (2005). Microvision web site. <http://www.microvision.com>.

Walsh, A. and Gehringer, D. (2001). Java 3D API Jump-Start, Prentice Hall PTR, 1st edition.

Zhang, C. and Hammad, A. (2005). Spatio-Temporal Issues in Infrastructure Lifecycle Management Systems, 1st CSCE Conference on Infrastructure Technologies, Management and Policy, Toronto.

Zlatanova, S., Rahman, A. and Pilouk, M. (2002). 3D GIS: Current Status and Perspectives, Proc. ISPRS Technical Commission IV Symposium, Ottawa, Canada.