User Interface for Volume Rendering in Virtual Reality Environments
Jonathan Klein, Dennis Reuling, Jan Grimm, Andreas Pfau, Damien Lefloch, Martin Lambers, Andreas Kolb
Version: February 8, 2013
Faculty of Science and Technology, Institute for Vision and Graphics (IVG), Computer Graphics and Multimedia Systems Group, Prof. Dr. Andreas Kolb
Abstract

Volume Rendering applications require sophisticated user interaction for the definition and refinement of transfer functions. Traditional 2D desktop user interface elements have been developed to solve this task, but such concepts do not map well to the interaction devices available in Virtual Reality environments. In this paper, we propose an intuitive user interface for Volume Rendering specifically designed for Virtual Reality environments. The proposed interface allows transfer function design and refinement based on intuitive two-handed operation of Wand-like controllers. Additional interaction modes such as navigation and clip plane manipulation are supported as well. The system is implemented using the Sony PlayStation Move controller system. This choice is based on controller device capabilities as well as application and environment constraints. Initial results document the potential of our approach.

1. Introduction

Volume Rendering visualizes 3D grids of voxels. Each voxel typically stores a scalar value representing density, as retrieved via a 3D scanning technique such as CT or MRI. Direct Volume Rendering techniques such as volume ray casting work directly on the voxel data instead of extracted geometry such as isosurfaces. Such techniques use a transfer function to map voxel values to opacity and color. The volume ray caster then generates a ray through the 3D grid for every pixel in the image plane, samples the voxel data along the ray, and composites the opacity and color information given by the transfer function to compute the final pixel color. A basic transfer function is a one-dimensional function that directly maps a scalar voxel value to opacity and color. Volume Rendering applications require user interface concepts that allow efficient and precise design and refinement of such transfer functions, to enable the user to visualize the interesting parts of the volume data set.
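To make the compositing step concrete, the following sketch shows front-to-back alpha-blending of transfer-function lookups along a single ray. The 256-entry lookup tables and the early-termination threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def composite_ray(samples, tf_color, tf_opacity):
    """Front-to-back compositing of voxel samples along one ray.

    samples:    scalar voxel values sampled along the ray
    tf_color:   (256, 3) table mapping a voxel value to an RGB color
    tf_opacity: (256,) table mapping a voxel value to an opacity
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        idx = int(np.clip(s, 0, 255))
        a = tf_opacity[idx]
        # accumulate color weighted by the remaining transparency
        color += (1.0 - alpha) * a * tf_color[idx]
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination: pixel is nearly opaque
            break
    return color, alpha
```

A GPU ray caster performs this loop in a fragment shader for every pixel; the structure of the computation is the same.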
In the traditional 2D graphical user interface domain of desktop systems, this problem is solved using 2D widgets that typically allow mouse-based manipulation of the functions [3]. This paper focuses on one-dimensional transfer functions; note, however, that advanced two-dimensional transfer function models exist that take the gradient or the curvature at the voxel location into account and require even more complex user interfaces. Since Virtual Environments are especially well suited to explore spatial properties of complex 3D data, bringing Volume Rendering applications into such environments is a natural step. However, defining new user interfaces suitable both for the Virtual Environment and for the Volume Rendering application is difficult. Previous approaches mainly focused on porting traditional 2D point-and-click concepts to the Virtual Environment [8, 5, 9]. This tends to be unintuitive, to complicate the interaction, and to make only limited use of the available interaction devices.
In this paper, we propose an intuitive 3D user interface for Volume Rendering based on interaction devices that are suitable for Virtual Reality environments. We focus on a simplified approach to designing and refining transfer functions that allows intuitive use of interaction devices, specifically the Sony PlayStation Move controller system. Our demonstration system also supports other Volume Rendering interaction modes such as navigation and clip plane manipulation. The remainder of this paper is organized as follows. Sec. 2 discusses related work. In Sec. 3, we describe our user interface concepts in detail and present their implementation based on Sony PlayStation Move controllers in a Virtual Reality Lab. Initial results are shown in Sec. 4. Sec. 5 concludes this paper.

2. Related Work

One of the first applications of Volume Rendering in a Virtual Reality environment was presented by Brady et al. in 1995 [1]. This early work concentrated on navigation using a Wand-like device. In 2000, Wohlfahrter et al. presented a two-handed interaction system with support for navigating, dragging, scaling, and cutting volume data in a Virtual Reality environment [12]. Neither of these early approaches supported transfer function manipulation. One of the first works on transfer function editing for Volume Rendering in Virtual Reality environments was presented by Schulze-Döbold et al. in 2001 [8]. Their transfer function editor requires a 6 DOF controller with three buttons. The controller is primarily used to point at an interaction element to select it for manipulation. To control scalar values, the editor uses virtual knobs that are manipulated by twisting the hand. The three buttons are used to manipulate the position and size of the 2D transfer function editor inside the 3D environment. This interface is directly based on the 2D desktop point-and-click interface. Consequently, the authors refer to the controller as a 3D mouse.
Schulze-Döbold later refined the user interface based on feedback collected in a user study [7], but the principal design remained unchanged. Wössner et al. reuse Schulze-Döbold's work for the purpose of collaborative volume exploration in distributed setups [13]. Kniss et al. split the task of defining multidimensional transfer functions into a classification step and an exploration step [5]. The classification step, which defines the transfer function, is performed prior to visualization on a classical 2D desktop system using the mouse. The Virtual Reality interface is based on Schulze-Döbold's work. Later works also mainly use variations of this approach of bringing 2D point-and-click interfaces to 3D environments [4, 9]. An exception is the work of Tawara and Ono from 2010, in which they combined a Wiimote and a motion tracking cube to get a tracked manipulation device for a volume data application [11]. However, their approach focuses on volume segmentation in augmented reality; in particular, it does not support transfer function manipulation.
Figure 1: Overview of the Virtual Reality environment (side and top views; cylindrical screen with radius r = 2.45 m)

3. 3D User Interface for Volume Rendering

A user interface for Volume Rendering applications must support two key interaction modes:

Navigation. This allows the user to inspect the volume from various perspectives by applying translations and rotations. A Virtual Reality environment with user tracking additionally allows the user to move around the volume.

Transfer function manipulation. A transfer function allows the user to visualize the interesting parts of the volume (by assigning color and opacity to the interesting voxel value ranges) and at the same time remove other parts of the volume from view (by mapping the corresponding voxel values to zero opacity).

In addition to these modes, Volume Rendering applications usually provide supplemental tools such as clip planes. In this paper, we focus on the transfer function manipulation mode of the user interface.

Choice of Input Devices

Our choice of input devices, interface concepts, and implementation details was based on the interaction requirements given by the Volume Rendering application, with special emphasis on transfer function manipulation, and on the specific constraints given by the Virtual Reality environment. The Virtual Reality environment which we used in this project has an open cylindrical screen and a floor screen, providing a wide field of view. See Fig. 1 and 2. The cylindrical screen has four rear-projection stereo channels and the floor screen has two front-projection stereo channels. Additionally, the environment provides optical user tracking.
Figure 2: Interactive Volume Rendering in the Virtual Reality environment

We considered interaction devices that are based on readily available and affordable hardware components and provide enough input facilities for both navigation and transfer function editing. Devices that fulfill these criteria include the Microsoft Kinect, the Nintendo Wiimote, and the Sony PlayStation Move. Traditional game controllers as well as the Flystick provided by our optical tracking system were excluded since they do not provide enough input facilities. The Microsoft Kinect uses an optical tracking system to follow the movement of the user, allowing full-body gesture interaction. Unfortunately, the Kinect system requires the camera to be placed directly in front of the user, which was not possible in our environment. We experimented with a Kinect camera placed at the top of the screen, looking down at the user at an angle of approximately 45 degrees, but this setup leads to unusable tracking data. The concepts of the Wiimote and the Move are similar. In both systems, the user holds a wand-like controller that can measure its movement and orientation. The Move system additionally uses an optical tracking system to determine the absolute position of the controller, and can therefore provide position and orientation data with higher precision and reliability. In contrast to the Kinect system, the Move system works fine with the camera mounted on top of the screen, as shown in Fig. 1. Furthermore, the Move controller has a glowing bulb on top whose color can be changed. This gives interesting opportunities for user feedback (see Sec. ). Recently, Takala et al. identified a lack of user interface concepts based on the PlayStation Move system, partly due to the locked-up nature of the SDK [10]. That situation has changed: the SDK is now freely available for educational and research purposes.
Additionally, prices for the hardware components of the system have dropped significantly. For these reasons, we chose to base our experiments on the PlayStation Move system.
Figure 3: The Sony PlayStation Move Nav-Pad (left) and controller (right). Both devices provide digital buttons (green) and an analogue trigger (blue). The Nav-Pad provides an additional analogue stick (red), while the controller has a bulb (orange) on its top that can glow in different colors.

Interface Concept

To allow both navigation and transfer function manipulation, and to have enough input facilities for the various functionalities required by these two modes, we decided to use both the Move controller (for the right hand) and an additional Move Nav-Pad (for the left hand). See Fig. 3. The Move controller, whose bulb is tracked by the camera of the system, is used for manipulation (translation and rotation in navigation mode, and modification in transfer function editing mode), while the Move Nav-Pad is used for selection and switching purposes. This configuration is classified by Schultheis et al. [6] as an asymmetric two-handed interface using two Wand-like input devices.

Transfer Function Editing

Small changes to a transfer function can result in significant changes in the visualization result, and usually more than one range of voxel values represents interesting parts of the volume data. For this reason, typical Volume Rendering applications allow the user to specify the transfer function using piecewise linear building blocks. However, this requires very fine-grained control, typically using a mouse-based interface. In order to allow more intuitive manipulation of transfer functions using hand-held devices, which typically favor coarser control movements, we use a different transfer function model. In our model, a transfer function is defined as the sum of window functions, called peaks. Each peak highlights a small voxel value range in a specific color.
A peak p is defined by its center c, width w, and height h:

    p(x) = h · sin( (π / (2w)) · (x − c + w) )   if c − w ≤ x ≤ c + w
    p(x) = 0                                     otherwise            (1)

(Alternatively, different window functions could be used.) The result of a transfer function t for a voxel value x is then defined as the result of alpha-blending the peak colors, using the peak value p(x) as the opacity value. In practical use, only a small number n < 8 of peaks is required, since more peaks tend to clutter up the visualization result. An example is given in Fig. 4. This transfer function model significantly reduces the number of parameters that a user has to modify, while still allowing flexible and powerful transfer functions. The user can add a peak to a transfer function and select its color from a list of predefined colors using digital buttons on the Move Nav-Pad (see Fig. 3). Similarly, peaks can be deleted, temporarily disabled, or selected for parameter adjustment using additional Nav-Pad buttons. To change the center c, width w, and height h of a peak, the Move controller is used. We tried different combinations of mapping these three peak parameters to the x-, y-, and z-axes of the Move controller. Using the z-axis proved to be unintuitive and therefore difficult to control. Our current solution is that the user chooses (using buttons on the Move controller) to adjust either c and h, or w. This has the advantage that both adjustments take place in the x/y-plane. The reason for separating w from the other parameters is that the visualization result is especially sensitive to changes of peak widths, so users tend to first define the position and height of a peak and then fine-tune the result by adjusting its width. To provide the user with visual feedback about the current transfer function properties and the selection state, we display an overview widget at a fixed position in the Virtual Environment, as shown in Fig. 4.
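A minimal sketch of this peak-based model: the window function of Eq. (1) plus alpha-blending of the peak colors. The blending order of the peaks (here: list order) is an assumption not specified in the text.

```python
import math

def peak(x, c, w, h):
    """Window function of Eq. (1): a sine bump of height h over [c - w, c + w]."""
    if c - w <= x <= c + w:
        return h * math.sin(math.pi / (2.0 * w) * (x - c + w))
    return 0.0

def transfer_function(x, peaks):
    """Alpha-blend the peak colors, using the peak value p(x) as the opacity.

    peaks: list of (c, w, h, (r, g, b)) tuples.
    Returns an (r, g, b) color and an overall opacity.
    """
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for c, w, h, rgb in peaks:
        a = peak(x, c, w, h)
        # standard 'over' blending with the peak value as source opacity
        for i in range(3):
            color[i] += (1.0 - alpha) * a * rgb[i]
        alpha += (1.0 - alpha) * a
    return tuple(color), alpha
```

At the peak center, p(c) = h · sin(π/2) = h, and the function falls smoothly to zero at c ± w.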
Note that this widget is for informational purposes only and does not require traditional point-and-click functionality. As an additional aid, we set the color of the glowing Move controller bulb to the color of the currently selected peak. Experienced users can use this feedback to determine the current state of transfer function manipulation without looking at the overview widget.

Navigation

Navigation is implemented by performing translation and rotation using the tracked Move controller. Translation is active while the largest digital button of the Move controller is pressed; rotation is active while the analogue trigger of the Move controller is pressed. See Fig. 3. Translation works by directly applying Move controller position changes to the volume. Rotation works by mapping horizontal controller movements to volume rotations around the y-axis, vertical movements to volume rotations around the x-axis, and controller rotations around the z-axis to volume rotations around the z-axis.
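The axis mapping described above can be sketched as follows; the per-frame delta inputs, the gain factor, and the matrix representation of the volume orientation are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix around a principal axis ('x', 'y' or 'z')."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == 'x':
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    if axis == 'y':
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def apply_navigation(position, orientation, dpos, droll,
                     translate_active, rotate_active, gain=1.0):
    """Apply one frame of Move controller input to the volume pose.

    dpos:  (dx, dy, dz) controller position change since the last frame
    droll: controller rotation change around its z-axis
    """
    if translate_active:
        # translation: apply controller position changes directly
        position = position + np.asarray(dpos)
    if rotate_active:
        dx, dy, _ = dpos
        # horizontal movement -> y-rotation, vertical movement -> x-rotation,
        # controller z-rotation -> volume z-rotation
        delta = rot('y', gain * dx) @ rot('x', gain * dy) @ rot('z', droll)
        orientation = delta @ orientation
    return position, orientation
```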
Figure 4: Transfer function defined by n = 5 peaks. Each peak assigns color and opacity information to a voxel value range. The histogram of voxel values is displayed in white in the background.

An obvious alternative would be to directly apply both Move controller position and orientation changes to the volume while in navigation mode, but separating translation and orientation in the way described above allows a more fine-grained control of movement, which is useful for examining smaller volume areas in detail. Furthermore, mapping controller movements to rotations instead of directly using the controller orientation avoids uncomfortable wrist positions. Requiring a button to be pressed for navigation mode allows the user to move freely in the Virtual Reality environment without unintentionally moving the volume.

Implementation

The physical setup is described by Fig. 1. Our software implementation is based on the Equalizer framework for parallel and distributed rendering [2]. This allows the application to run across the six nodes of our render cluster, each of which renders one of the stereo channels using two graphics cards. We used the official Move.Me SDK from Sony to connect the PlayStation Move system to Equalizer. The controller sends its sensor data via Bluetooth to the PlayStation console, on which the Move.Me SDK runs as a server application. Our application acts as a client to this server and receives position, orientation, and button state data via UDP network packets. This data is transformed into custom input events and handed over to the Equalizer event handling mechanisms to allow consistent input event handling. The GPU-based volume ray caster is based on a ray casting shader implementation provided by the Visualization Library project. The ray caster is straightforward but proved sufficient for our purposes while being fast enough for interactive use in a Virtual Reality environment.
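The event path from the Move.Me server into the application can be sketched as a small UDP client that decodes state packets into custom input events. The packet layout (position, quaternion, button bitmask) and the port number are hypothetical; the real Move.Me protocol defines its own formats.

```python
import socket
import struct
from dataclasses import dataclass

# Hypothetical packet layout: 3 floats position, 4 floats quaternion,
# 1 unsigned int of button state bits (NOT the actual Move.Me format).
PACKET_FMT = "<7fI"

@dataclass
class MoveInputEvent:
    position: tuple     # (x, y, z) in tracking coordinates
    orientation: tuple  # quaternion (w, x, y, z)
    buttons: int        # button state bitmask

def parse_packet(data: bytes) -> MoveInputEvent:
    """Decode one controller state packet into a custom input event."""
    px, py, pz, qw, qx, qy, qz, buttons = struct.unpack(PACKET_FMT, data)
    return MoveInputEvent((px, py, pz), (qw, qx, qy, qz), buttons)

def receive_events(host="0.0.0.0", port=7899):
    """Yield input events from incoming UDP packets (port is an assumption)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    size = struct.calcsize(PACKET_FMT)
    while True:
        data, _ = sock.recvfrom(size)
        yield parse_packet(data)
```

Events produced this way would then be handed to the application's event queue, so that Move input is processed like any other input device.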
Figure 5: Predefined result. Figure 6: Result of the experienced user. Figure 7: Result of the inexperienced user.

4. Initial Results

In the figures throughout this paper, we used the Baby Head data set available from the volume library of Stefan Roettger. As a first test of our concept, we performed an experiment involving one test user with experience in both Virtual Reality and Volume Rendering applications and another test user with no experience in these domains. The task for these two test users was to reproduce a visualization result for the Baby Head data set, starting from a default transfer function. The predefined visualization result was produced with a transfer function consisting of three peaks that separate the skeleton (green), the teeth (red), and the skin (blue). See Fig. 5. The experienced test user was able to solve the task in less than half a minute with good results, shown in Fig. 6. After an introduction to controller usage and button configuration, the inexperienced user was able to achieve a satisfying result, shown in Fig. 7, in approximately one minute. In this and similar experiments with several test data sets (only one of which is shown here), the simplified transfer function model was powerful enough and the transfer function manipulation was precise enough to highlight interesting aspects of the data. More demanding data sets might require more fine-grained control, which may require changes to the transfer function model and/or adjustments to the interface sensitivity. The two-handed interface requires some coordination between both hands, which inexperienced users are unaccustomed to. However, after some practice, this does not seem
to pose a problem. Another challenge, especially for inexperienced users, is to remember the mapping of controller movements and buttons to the various interaction functionalities of the application. We plan to integrate an online help function that displays images of the controllers on screen, along with schematic descriptions of their usage, in order to avoid the need for leaving the Virtual Reality environment to look up user interface documentation. Despite these shortcomings, we believe the approach has the potential to be usable in real-world Volume Rendering applications.

5. Conclusion

We propose a 3D user interface concept for Volume Rendering in Virtual Environments. Unlike previous approaches, this concept does not try to map traditional 2D point-and-click concepts to the Virtual Environment; instead, it is based on a set of intuitive user actions using the Sony PlayStation Move controller system. For this purpose, a simplified transfer function model was designed that allows a reduction of interaction complexity. This comes at the cost of reduced flexibility and precision when compared to a traditional 2D desktop interface, but we believe that our system is still flexible and precise enough for the exploration of most volume data sets, while allowing faster and more intuitive manipulation of transfer functions. In the future, we would like to refine the interface based on user feedback. Furthermore, it would be interesting to explore the possibility of extending our approach to two-dimensional transfer functions.

References

1. R. Brady, J. Pixton, G. Baxter, P. Moran, C. Potter, B. Carragher, and A. Belmont. Crumbs: a virtual environment tracking tool for biological imaging. In Proc. Biomedical Vis., pages 18-25, 82, Oct. 1995.
2. S. Eilemann. Equalizer: A scalable parallel rendering framework. IEEE Trans. Visualization and Computer Graphics, 15(3), May 2009.
3. K. Engel, M. Hadwiger, J. Kniss, C. Rezk-Salama, and D. Weiskopf. Real-Time Volume Graphics. AK Peters, 2006.
4. C. He, A. Lewis, and J. Jo. A novel human computer interaction paradigm for volume visualization in projection-based virtual environments. In Proc. Int. Symp. Smart Graphics, pages 49-60.
5. J. Kniss, J. Schulze, U. Wössner, P. Winkler, U. Lang, and C. Hansen. Medical applications of multi-field volume rendering and VR techniques. In Proc. IEEE VGTC Symp. Vis., 2004.
6. U. Schultheis, J. Jerald, F. Toledo, A. Yoganandan, and P. Mlyniec. Comparison of a two-handed interface to a wand interface and a mouse interface for fundamental 3D tasks. In IEEE Symp. 3D User Interfaces (3DUI), Mar. 2012.
7. J. Schulze-Döbold. Interactive Volume Rendering in Virtual Environments. PhD thesis, University of Stuttgart, Aug. 2003.
8. J. Schulze-Döbold, U. Wössner, S. Walz, and U. Lang. Volume rendering in a virtual environment. In Proc. Immersive Projection Technology Workshop and Eurographics Workshop on Virtual Environments, 2001.
9. R. Shen, P. Boulanger, and M. Noga. MedVis: A real-time immersive visualization environment for the exploration of medical volumetric data. In Proc. Biomedical Vis., pages 63-68, July 2008.
10. T. Takala, P. Rauhamaa, and T. Takala. Survey of 3DUI applications and development challenges. In IEEE Symp. 3D User Interfaces (3DUI), pages 89-96, Mar. 2012.
11. T. Tawara and K. Ono. A framework for volume segmentation and visualization using augmented reality. In IEEE Symp. 3D User Interfaces (3DUI), Mar. 2010.
12. W. Wohlfahrter, L. Encarnação, and D. Schmalstieg. Interactive volume exploration on the StudyDesk. In Proc. Int. Projection Technology Workshop, June 2000.
13. U. Wössner, J. P. Schulze, S. P. Walz, and U. Lang. Evaluation of a collaborative volume rendering application in a distributed virtual environment. In Proc. Eurographics Workshop on Virtual Environments (EGVE), page 113ff., May 2002.
More informationTask analysis based on observing hands and objects by vision
Task analysis based on observing hands and objects by vision Yoshihiro SATO Keni Bernardin Hiroshi KIMURA Katsushi IKEUCHI Univ. of Electro-Communications Univ. of Karlsruhe Univ. of Tokyo Abstract In
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationGetting Started. What is SAS/SPECTRAVIEW Software? CHAPTER 1
3 CHAPTER 1 Getting Started What is SAS/SPECTRAVIEW Software? 3 Using SAS/SPECTRAVIEW Software 5 Data Set Requirements 5 How the Software Displays Data 6 Spatial Data 6 Non-Spatial Data 7 Summary of Software
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationBuilding Reliable 2D Maps from 3D Features
Building Reliable 2D Maps from 3D Features Dipl. Technoinform. Jens Wettach, Prof. Dr. rer. nat. Karsten Berns TU Kaiserslautern; Robotics Research Lab 1, Geb. 48; Gottlieb-Daimler- Str.1; 67663 Kaiserslautern;
More informationChapter 10. Creating 3D Objects Delmar, Cengage Learning
Chapter 10 Creating 3D Objects 2011 Delmar, Cengage Learning Objectives Extrude objects Revolve objects Manipulate surface shading and lighting Map artwork to 3D objects Extrude Objects Extrude & Bevel
More informationA Study of Medical Image Analysis System
Indian Journal of Science and Technology, Vol 8(25), DOI: 10.17485/ijst/2015/v8i25/80492, October 2015 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 A Study of Medical Image Analysis System Kim Tae-Eun
More informationShort Survey on Static Hand Gesture Recognition
Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of
More informationProjection simulator to support design development of spherical immersive display
Projection simulator to support design development of spherical immersive display Wataru Hashimoto, Yasuharu Mizutani, and Satoshi Nishiguchi Faculty of Information Sciences and Technology, Osaka Institute
More informationChapter 8 Visualization and Optimization
Chapter 8 Visualization and Optimization Recommended reference books: [1] Edited by R. S. Gallagher: Computer Visualization, Graphics Techniques for Scientific and Engineering Analysis by CRC, 1994 [2]
More informationThe Implementation of a Glove-Based User Interface
The Implementation of a Glove-Based User Interface Chris Carey January 26, 2010 Abstract Multi-touch interfaces have been rising in usage because of how their multiple points of input simplify the execution
More informationToward Spatial Queries for Spatial Surveillance Tasks
Toward Spatial Queries for Spatial Surveillance Tasks Yuri A. Ivanov and Christopher R. Wren Mitsubishi Electric Research Laboratories 201 Broadway 8th Floor; Cambridge MA USA 02139 email: {wren,ivanov}merl.com
More information3DView Installation and Operation
The following notes are intended as a guide to the installation and operation of the 3DView Volume Rendering Application. As this software is still under development, some of the features and screenshots
More informationReal-time Gesture Pattern Classification with IMU Data
Real-time Gesture Pattern Classification with IMU Data Alex Fu Stanford University Computer Science Department alexfu@stanford.edu Yangyang Yu Stanford University Electrical Engineering Department yyu10@stanford.edu
More informationSYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET
SYSTEM FOR ACTIVE VIDEO OBSERVATION OVER THE INTERNET Borut Batagelj, Peter Peer, Franc Solina University of Ljubljana Faculty of Computer and Information Science Computer Vision Laboratory Tržaška 25,
More informationMATHEMATICS 105 Plane Trigonometry
Chapter I THE TRIGONOMETRIC FUNCTIONS MATHEMATICS 105 Plane Trigonometry INTRODUCTION The word trigonometry literally means triangle measurement. It is concerned with the measurement of the parts, sides,
More informationSilhouette-based Multiple-View Camera Calibration
Silhouette-based Multiple-View Camera Calibration Prashant Ramanathan, Eckehard Steinbach, and Bernd Girod Information Systems Laboratory, Electrical Engineering Department, Stanford University Stanford,
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent
More informationDevelopment of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras
Proceedings of the 5th IIAE International Conference on Industrial Application Engineering 2017 Development of 3D Positioning Scheme by Integration of Multiple Wiimote IR Cameras Hui-Yuan Chan *, Ting-Hao
More informationJune 05, 2018, Version 3.0.6
June 05, 2018, Version 3.0.6 VolViCon is an advanced application for reconstruction of computed tomography (CT), magnetic resonance (MR), ultrasound, and x-rays images. It gives features for exporting
More informationccassembler 2.1 Getting Started
ccassembler 2.1 Getting Started Dated: 29/02/2012 www.cadclick.de - 1 - KiM GmbH 1 Basic Principles... 6 1.1 Installing anchor on anchor... 6 1.2 Modes and Actions... 6 1.3 Mouse control and direct input...
More informationScroll Display: Pointing Device for Palmtop Computers
Asia Pacific Computer Human Interaction 1998 (APCHI 98), Japan, July 15-17, 1998, pp. 243-248, IEEE Computer Society, ISBN 0-8186-8347-3/98 $10.00 (c) 1998 IEEE Scroll Display: Pointing Device for Palmtop
More informationACCURACY ANALYSIS FOR NEW CLOSE-RANGE PHOTOGRAMMETRIC SYSTEMS
ACCURACY ANALYSIS FOR NEW CLOSE-RANGE PHOTOGRAMMETRIC SYSTEMS Dr. Mahmoud El-Nokrashy O. ALI Prof. of Photogrammetry, Civil Eng. Al Azhar University, Cairo, Egypt m_ali@starnet.com.eg Dr. Mohamed Ashraf
More informationINTERACTIVE ENVIRONMENT FOR INTUITIVE UNDERSTANDING OF 4D DATA. M. Murata and S. Hashimoto Humanoid Robotics Institute, Waseda University, Japan
1 INTRODUCTION INTERACTIVE ENVIRONMENT FOR INTUITIVE UNDERSTANDING OF 4D DATA M. Murata and S. Hashimoto Humanoid Robotics Institute, Waseda University, Japan Abstract: We present a new virtual reality
More informationThe Application Research of 3D Simulation Modeling Technology in the Sports Teaching YANG Jun-wa 1, a
4th National Conference on Electrical, Electronics and Computer Engineering (NCEECE 2015) The Application Research of 3D Simulation Modeling Technology in the Sports Teaching YANG Jun-wa 1, a 1 Zhengde
More informationFirst Steps in Hardware Two-Level Volume Rendering
First Steps in Hardware Two-Level Volume Rendering Markus Hadwiger, Helwig Hauser Abstract We describe first steps toward implementing two-level volume rendering (abbreviated as 2lVR) on consumer PC graphics
More informationCross-Language Interfacing and Gesture Detection with Microsoft Kinect
Cross-Language Interfacing and Gesture Detection with Microsoft Kinect Phu Kieu Department of Computer Science and Mathematics, California State University, Los Angeles Los Angeles, CA 90032 Lucy Abramyan,
More informationVolume Illumination. Visualisation Lecture 11. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics
Volume Illumination Visualisation Lecture 11 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Volume Illumination & Vector Vis. 1 Previously : Volume Rendering
More informationInteractive 3D Geometrical Modelers for Virtual Reality and Design. Mark Green*, Jiandong Liang**, and Chris Shaw*
Interactive 3D Geometrical Modelers for Virtual Reality and Design Mark Green*, Jiandong Liang**, and Chris Shaw* *Department of Computing Science, University of Alberta, Edmonton, Canada **Alberta Research
More informationAn Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering
An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering T. Ropinski, F. Steinicke, K. Hinrichs Institut für Informatik, Westfälische Wilhelms-Universität Münster
More informationCIS 467/602-01: Data Visualization
CIS 467/602-01: Data Visualization Vector Field Visualization Dr. David Koop Fields Tables Networks & Trees Fields Geometry Clusters, Sets, Lists Items Items (nodes) Grids Items Items Attributes Links
More informationIntroduction to 3D Graphics
Graphics Without Polygons Volume Rendering May 11, 2010 So Far Volumetric Rendering Techniques Misc. So Far Extended the Fixed Function Pipeline with a Programmable Pipeline Programming the pipeline is
More informationComputation of Three-Dimensional Electromagnetic Fields for an Augmented Reality Environment
Excerpt from the Proceedings of the COMSOL Conference 2008 Hannover Computation of Three-Dimensional Electromagnetic Fields for an Augmented Reality Environment André Buchau 1 * and Wolfgang M. Rucker
More informationModeling and Simulation of Dynamical Systems
Seminar Modeling and Simulation of Dynamical Systems Presented by the IEEE Control Systems Society Santa Clara Valley Sunnyvale, 5 February 2011 1 Session 4 Part I: Background Visualization and Virtual
More informationChapter 24. Creating Surfaces for Displaying and Reporting Data
Chapter 24. Creating Surfaces for Displaying and Reporting Data FLUENT allows you to select portions of the domain to be used for visualizing the flow field. The domain portions are called surfaces, and
More informationParticle-Filter-Based Self-Localization Using Landmarks and Directed Lines
Particle-Filter-Based Self-Localization Using Landmarks and Directed Lines Thomas Röfer 1, Tim Laue 1, and Dirk Thomas 2 1 Center for Computing Technology (TZI), FB 3, Universität Bremen roefer@tzi.de,
More informationProjection Center Calibration for a Co-located Projector Camera System
Projection Center Calibration for a Co-located Camera System Toshiyuki Amano Department of Computer and Communication Science Faculty of Systems Engineering, Wakayama University Sakaedani 930, Wakayama,
More informationEnabling a Robot to Open Doors Andrei Iancu, Ellen Klingbeil, Justin Pearson Stanford University - CS Fall 2007
Enabling a Robot to Open Doors Andrei Iancu, Ellen Klingbeil, Justin Pearson Stanford University - CS 229 - Fall 2007 I. Introduction The area of robotic exploration is a field of growing interest and
More informationSegmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator
Segmentation and Modeling of the Spinal Cord for Reality-based Surgical Simulator Li X.C.,, Chui C. K.,, and Ong S. H.,* Dept. of Electrical and Computer Engineering Dept. of Mechanical Engineering, National
More informationData Visualization. What is the goal? A generalized environment for manipulation and visualization of multidimensional data
Data Visualization NIH-NSF NSF BBSI: Simulation and Computer Visualization of Biological Systems at Multiple Scales Joel R. Stiles, MD, PhD What is real? Examples of some mind-bending optical illusions
More informationARCHITECTURE & GAMES. A is for Architect Simple Mass Modeling FORM & SPACE. Industry Careers Framework. Applied. Getting Started.
A is for Architect Simple Mass Modeling One of the first introductions to form and space usually comes at a very early age. As an infant, you might have played with building blocks to help hone your motor
More informationWhere s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map
Where s the Boss? : Monte Carlo Localization for an Autonomous Ground Vehicle using an Aerial Lidar Map Sebastian Scherer, Young-Woo Seo, and Prasanna Velagapudi October 16, 2007 Robotics Institute Carnegie
More informationA Visual Programming Environment for Machine Vision Engineers. Paul F Whelan
A Visual Programming Environment for Machine Vision Engineers Paul F Whelan Vision Systems Group School of Electronic Engineering, Dublin City University, Dublin 9, Ireland. Ph: +353 1 700 5489 Fax: +353
More informationVideo-Based Motion Analysis
Video-Based Motion Analysis Bruce McKay Saint Ignatius College Introduction This session will cover the basics of recording and producing video clips for analysis. Video analysis provides the opportunity
More informationProduction of Video Images by Computer Controlled Cameras and Its Application to TV Conference System
Proc. of IEEE Conference on Computer Vision and Pattern Recognition, vol.2, II-131 II-137, Dec. 2001. Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System
More informationAnamorphic Art with a Tilted Cylinder
Anamorphic Art with a Tilted Cylinder Erika Gerhold and Angela Rose Math and Computer Science Salisbury University 1101 Camden Avenue Salisbury Maryland 21801 USA Faculty Advisor: Dr. Don Spickler Abstract
More informationVIRTUAL TRAIL ROOM. South Asian Journal of Engineering and Technology Vol.3, No.5 (2017) 87 96
VIRTUAL TRAIL ROOM 1 Vipin Paul, 2 Sanju Abel J., 3 Sudharsan S., 4 Praveen M. 1 Vipinpaul95@gmail.com 3 shansudharsan002@gmail.com 2 sanjuabel@gmail.com 4 praveen.pravin6@gmail.com Department of computer
More informationData Visualization (CIS/DSC 468)
Data Visualization (CIS/DSC 46) Volume Rendering Dr. David Koop Visualizing Volume (3D) Data 2D visualization slice images (or multi-planar reformating MPR) Indirect 3D visualization isosurfaces (or surface-shaded
More informationImageVis3D User's Manual
ImageVis3D User's Manual 1 1. The current state of ImageVis3D Remember : 1. If ImageVis3D causes any kind of trouble, please report this to us! 2. We are still in the process of adding features to the
More informationMultimedia Technology CHAPTER 4. Video and Animation
CHAPTER 4 Video and Animation - Both video and animation give us a sense of motion. They exploit some properties of human eye s ability of viewing pictures. - Motion video is the element of multimedia
More informationInteractive PTZ Camera Control System Using Wii Remote and Infrared Sensor Bar
Interactive PTZ Camera Control System Using Wii Remote and Infrared Sensor Bar A. H. W. Goh, Y. S. Yong, C. H. Chan, S. J. Then, L. P. Chu, S. W. Chau, and H. W. Hon International Science Index, Computer
More informationVolume Illumination & Vector Field Visualisation
Volume Illumination & Vector Field Visualisation Visualisation Lecture 11 Institute for Perception, Action & Behaviour School of Informatics Volume Illumination & Vector Vis. 1 Previously : Volume Rendering
More informationA Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision
A Simple Interface for Mobile Robot Equipped with Single Camera using Motion Stereo Vision Stephen Karungaru, Atsushi Ishitani, Takuya Shiraishi, and Minoru Fukumi Abstract Recently, robot technology has
More informationData Visualization. What is the goal? A generalized environment for manipulation and visualization of multidimensional data
Data Visualization NIH-NSF NSF BBSI: Simulation and Computer Visualization of Biological Systems at Multiple Scales June 2-4, 2 2004 Joel R. Stiles, MD, PhD What is the goal? A generalized environment
More informationLecture overview. Visualisatie BMT. Transparency. Transparency. Transparency. Transparency. Transparency Volume rendering Assignment
Visualisatie BMT Lecture overview Assignment Arjan Kok a.j.f.kok@tue.nl 1 Makes it possible to see inside or behind objects Complement of transparency is opacity Opacity defined by alpha value with range
More informationanimation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time
animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationWhite Paper. Perlin Fire. February 2007 WP _v01
White Paper Perlin Fire February 2007 WP-03012-001_v01 Document Change History Version Date Responsible Reason for Change 01 AT, CK Initial release Go to sdkfeedback@nvidia.com to provide feedback on Perlin
More informationanimation computer graphics animation 2009 fabio pellacini 1
animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to
More informationComputational Strategies
Computational Strategies How can the basic ingredients be combined: Image Order Ray casting (many options) Object Order (in world coordinate) splatting, texture mapping Combination (neither) Shear warp,
More information