Conveying 3D Shape and Depth with Textured and Transparent Surfaces
Victoria Interrante


In scientific visualization, there are many applications in which researchers need to achieve an integrated understanding of the three-dimensional shapes and relative depth distances of multiple overlapping objects. Transparent surface rendering has the potential, in many of these cases, to be a useful device for simultaneously portraying multiple superimposed structures in a single image, so that their complex spatial relationships can be more accurately and comprehensively inferred. The challenge is to determine how to portray the outermost objects in such a way that they can be both effectively seen, and also seen through, at the same time. In this article we briefly review a variety of issues in transparent surface rendering, and describe some novel graphical techniques that aim to enhance the representation of external transparent surfaces, while maintaining the visibility of internal structures, through the careful design and application of sparsely distributed opaque surface markings. Integral to this discussion is an analysis of the effects that various texture pattern characteristics, such as orientation, have on surface shape perception.

The Trouble with Plain Transparency

In computer graphics images, as in everyday experience, smoothly finished external transparent surfaces can be surprisingly difficult to adequately perceive. Not only is the depth distance between an outer transparent surface and inner opaque entities typically difficult to assess, but even the depth order relationships can at times appear ambiguous, and the subtle shading cues that might otherwise reveal the 3D shape of a gently curving external transparent surface are often masked by the gradients of transmitted intensity of the underlying opaque inner structures. Essentially, the visual stimulus does not carry enough information for an observer to accurately interpret either the geometry or the layout of the multiple objects in the scene.

Complicating matters is that, in normal stereo vision, specular highlights are not perceived to lie on a reflective surface but rather appear to float in space, either above the surface if it is concave, or below the surface if it is convex [1]. Incident light is specularly reflected by smooth, shiny materials in a preferred direction determined by the angle of incidence with respect to the surface normal. Thus, on a curved surface, the highlight due to a given light source will appear to lie in a different position on the surface when viewed from one eye than when viewed from the other. Instead of perceiving the highlight to be in two locations at once, our visual system forms a single unified percept of the highlight floating in space. Psychologists have shown that people can use the direction of the displacement of the highlight to disambiguate bumps from depressions [1].
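To make the geometry behind this stereo effect concrete, the following minimal sketch (our own illustration, not code from the article; the sphere, light direction, and eye positions are arbitrary assumptions) locates the surface point of a unit sphere where a simple Phong-style specular highlight appears brightest, once for a left eye position and once for a right eye position, showing that the two points differ.

```python
# A minimal, hypothetical sketch (not from the article): for a unit sphere and
# a distant light, find the surface point where a Phong-style specular
# highlight appears brightest, once for the left eye and once for the right.
# The sphere, light direction, and eye positions are arbitrary assumptions.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def highlight_point(eye, light_dir, n_samples=2000, shininess=400.0):
    """Surface point of a unit sphere with the strongest specular response."""
    best_p, best_s = None, -1.0
    for t in np.linspace(-np.pi / 2, np.pi / 2, n_samples):
        p = np.array([np.sin(t), 0.0, np.cos(t)])       # point on the y = 0 great circle
        n = p                                            # unit sphere: normal equals position
        r = 2.0 * np.dot(n, light_dir) * n - light_dir   # mirror reflection of the light
        v = normalize(eye - p)                           # direction from surface point to eye
        s = max(np.dot(r, v), 0.0) ** shininess          # Phong specular term
        if s > best_s:
            best_p, best_s = p, s
    return best_p

light = normalize(np.array([0.3, 0.0, 1.0]))             # distant light direction
left_eye = np.array([-0.03, 0.0, 5.0])                   # eyes ~6 cm apart, 5 units away
right_eye = np.array([0.03, 0.0, 5.0])

print("left-eye highlight point: ", highlight_point(left_eye, light))
print("right-eye highlight point:", highlight_point(right_eye, light))
# The two points differ slightly; fused binocularly, the highlight is perceived
# to float off the surface rather than to lie on it.
```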

Choosing an Appropriate Shading Model for Rendering a Transparent Surface

There are several common shading models that can be used to represent transparent surfaces. The choice of shading model is important because it affects the type of transparent material that is simulated. One of the most commonly used models is additive transparency, in which the final intensity I at a point is defined as a linear combination, weighted by the surface opacity α ∈ (0, 1), of the intensity I_f of the transparent surface and the intensity I_b of the background: I = αI_f + (1 − α)I_b. This model results in surfaces that appear to be made of gauze; when the surface is folded upon itself multiple times, the result converges to the color of the transparent material. An alternative is subtractive, or multiplicative, transparency, in which the transparent surfaces are modeled as filters that impede the transmission of light, so that as multiple surfaces are overlapped the result gets progressively darker, tending toward black. (A brief numerical sketch of both compositing rules appears at the end of this section.)

[Figure: top, additive transparency; bottom, subtractive transparency.]

Emphasizing Essential Lines

By adding detail, in the form of sparse opaque markings, to an external transparent surface, we have the potential to more effectively convey both its intrinsic shape features and its depth distance from underlying structures. The challenge is to decide what sort of markings will be most appropriate, and to define an algorithm for determining how to place them over the surface. Silhouette lines are appropriate to enhance on faceted objects, but on smoothly curving surfaces they can be problematic because they will not lie in the same place over the surface in the views from each eye. On complicated surfaces that exhibit multiple inflections of Gaussian curvature, valley lines can be useful for conveying important additional shape information. These lines are defined as the locus of points that lie at minima of negative curvature in the direction of greatest normal curvature over the surface, and their perceptual relevance is affirmed by research suggesting that people tend to subdivide objects into parts along their valley lines [3]. The figure below illustrates the use of valley lines on a transparent surface in a scientific visualization application [8].

[Figure: left, a typical rendering of some of the multiple surfaces of interest in a radiation therapy treatment plan for prostate cancer; right, an enhanced rendering [8] in which opaque feature lines have been added along the major creases in the skin surface, with the intent to clarify the structure of the form and draw attention to the location of sensitive soft tissue structures that must be kept outside of any beam path.]
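As promised above, here is a minimal numerical sketch of the two compositing rules described under "Choosing an Appropriate Shading Model" (the layer color, opacity, and number of folds are illustrative assumptions, not values from the article). Repeatedly applying each rule shows that additive blending converges toward the layer color, while multiplicative filtering darkens toward black.

```python
# A minimal numerical sketch of the two compositing rules described above
# (the layer color, opacity, and number of folds are illustrative assumptions).

import numpy as np

def additive_over(surface_rgb, background_rgb, alpha):
    """'Gauzy' alpha blending: I = alpha * I_f + (1 - alpha) * I_b."""
    return alpha * surface_rgb + (1.0 - alpha) * background_rgb

def multiplicative_filter(surface_rgb, background_rgb):
    """Subtractive/multiplicative transparency: each layer attenuates the light behind it."""
    return surface_rgb * background_rgb          # colors expressed in [0, 1]

layer = np.array([0.9, 0.6, 0.6])                # a pale red transparent layer
background = np.array([1.0, 1.0, 1.0])           # white background

# Fold the same layer over itself eight times under each rule.
add_result, mul_result = background.copy(), background.copy()
for _ in range(8):
    add_result = additive_over(layer, add_result, alpha=0.3)
    mul_result = multiplicative_filter(layer, mul_result)

print("additive:      ", np.round(add_result, 3))   # converges toward the layer color
print("multiplicative:", np.round(mul_result, 3))   # darkens, tending toward black
```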

Clarifying the Flow of the Form

Not all smoothly curving surfaces have shapes that can be well captured by a small set of feature lines. An alternative approach [7] is to apply a texture of uniformly distributed sparse opaque markings over the transparent surface [10]. However, the choice of texture pattern, and how it is applied over the surface, has to be made with care. Lest the texture do more harm than good, it must be aesthetic, unobtrusive, of a style appropriate to the application, and applied in a way that emphasizes rather than masks the curvature of the form. Inspired by the example of line drawings in medical illustration, we have developed an algorithm [6], illustrated in the figure below, in which 3D line integral convolution [11] is used to synthesize a solid texture of sparse opaque strokes following the vector field of principal directions over the transparent surface.

[Figure: left, a prototypical scientific dataset involving layered transparent surfaces: a level surface of radiation dose enclosing an opaque treatment region; right, an enhanced rendering [6] of this data, in which the outer transparent surface has been augmented with a see-through texture designed to emphasize its shape.]

Conveying Shape Through Texture on Opaque Surfaces

If one could design the perfect texture pattern to apply to any smooth surface in order to enable its shape to be more accurately and intuitively perceived, what would the characteristics of that texture pattern be? Recent research suggests that patterns with a strong directional component show shape best when the texture is everywhere oriented in the direction of maximum normal curvature (the first principal direction). Textures that contain significant geodesic curvature, or that turn in the surface, tend to mask surface shape. Textures that follow both principal directions also offer a practical advantage: it is extremely difficult to define a globally consistent orientation for the first principal direction vector field, and in some cases no global solution exists, so a texture that follows only the first principal direction can appear to rotate 90 degrees wherever the first and second principal directions switch places, whereas a pattern aligned with both directions is unaffected by the swap. Recently developed surface texture synthesis algorithms [2] allow one to apply a wide class of 2D patterns over arbitrary polygonal models without seams or stretching, and in such a way that the pattern is constrained to follow a predefined vector field at a per-pixel level over the surface. This algorithm was used to create the images in the second figure below.
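As a rough illustration of the differential geometry underlying such direction-driven textures, the following sketch (a simplified stand-in, not the paper's implementation, and restricted to a height field z = h(x, y) rather than a general volume or polygonal mesh) estimates principal curvatures and principal directions from the first and second fundamental forms; a stroke texture of the kind described above would be advected along the first principal direction field d1.

```python
# A simplified, illustrative sketch (not the paper's implementation) of how
# principal curvatures and directions can be estimated on a height field
# z = h(x, y). The grid, the bump function, and all names are assumptions
# made for this example.

import numpy as np

def principal_directions(h, x, y):
    """Principal curvatures (k1, k2) and directions (d1, d2) of z = h(x, y)."""
    hy, hx = np.gradient(h, y, x)                # first partial derivatives
    hxy, hxx = np.gradient(hx, y, x)             # second partial derivatives
    hyy, _ = np.gradient(hy, y, x)
    # First (I) and second (II) fundamental forms of the graph (x, y, h(x, y)).
    E, F, G = 1 + hx**2, hx * hy, 1 + hy**2
    w = np.sqrt(1 + hx**2 + hy**2)
    L, M, N = hxx / w, hxy / w, hyy / w
    k1 = np.empty_like(h); k2 = np.empty_like(h)
    d1 = np.empty(h.shape + (2,)); d2 = np.empty(h.shape + (2,))
    for i in range(h.shape[0]):
        for j in range(h.shape[1]):
            I_ = np.array([[E[i, j], F[i, j]], [F[i, j], G[i, j]]])
            II = np.array([[L[i, j], M[i, j]], [M[i, j], N[i, j]]])
            S = np.linalg.solve(I_, II)             # shape operator in the parameter basis
            evals, evecs = np.linalg.eig(S)
            evals, evecs = evals.real, evecs.real   # curvatures are real; drop numerical noise
            order = np.argsort(-np.abs(evals))      # here: first direction = largest |curvature|
            k1[i, j], k2[i, j] = evals[order[0]], evals[order[1]]
            d1[i, j], d2[i, j] = evecs[:, order[0]], evecs[:, order[1]]
    return k1, k2, d1, d2

# Example: a Gaussian bump; strokes following d1 wrap around the bump's flanks.
x = np.linspace(-2.0, 2.0, 64)
y = np.linspace(-2.0, 2.0, 64)
X, Y = np.meshgrid(x, y)
k1, k2, d1, d2 = principal_directions(np.exp(-(X**2 + Y**2)), x, y)
```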

Studies have found that how a texture is oriented over a surface can significantly affect how accurately the surface shape can be perceived: anisotropic texture tends to mask surface shape when the direction of the anisotropy is not aligned with the principal directions of the form [4,5].

[Figure: from left to right, an isotropic texture; a uniformly oriented anisotropic texture; an anisotropic texture that turns in the surface; and an anisotropic texture that follows the first principal direction at every point.]

Textures that follow both principal directions appear to show shape slightly more effectively than textures that follow only one [9].

[Figure: left, a smoothly shaded, untextured surface; center, the same surface textured with a line integral convolution pattern that follows the first principal direction vector field at every point; right, the same surface textured with an orthogonal grid pattern that is everywhere aligned with both the first and second principal directions.]

Notes:
1. Andrew Blake and Heinrich Bülthoff (1991) Shape from Specularities: computation and psychophysics, Philosophical Transactions of the Royal Society of London, B, 331, pp. 237-252.
2. Gabriele Gorla, Victoria Interrante and Guillermo Sapiro (2002) Texture Synthesis for 3D Shape Representation, IEEE Transactions on Visualization and Computer Graphics, to appear.
3. Donald D. Hoffman and Whitman A. Richards (1984) Parts of Recognition, Cognition, 18, pp. 65-96.
4. Victoria Interrante, Sunghee Kim and Haleh Hagh-Shenas (2002) Conveying 3D Shape with Texture: recent advances and experimental findings, Human Vision and Electronic Imaging VII, SPIE 4662.
5. Victoria Interrante and Sunghee Kim (2001) Investigating the Effect of Texture Orientation on the Perception of 3D Shape, Human Vision and Electronic Imaging VI, SPIE 4299, pp. 330-339.
6. Victoria Interrante (1997) Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution, Computer Graphics, Annual Conference Series (ACM SIGGRAPH 97), pp. 109-116.
7. Victoria Interrante, Henry Fuchs and Stephen Pizer (1997) Conveying the 3D Shape of Smoothly Curving Transparent Surfaces via Texture, IEEE Transactions on Visualization and Computer Graphics, 3(2), pp. 98-117, April-June 1997.
8. Victoria Interrante, Henry Fuchs and Stephen Pizer (1995) Enhancing Transparent Skin Surfaces with Ridge and Valley Lines, Proc. IEEE Visualization '95, pp. 52-59.
9. Sunghee Kim, Haleh Hagh-Shenas and Victoria Interrante (2002) Showing Shape with Texture, IEEE Visualization 2002, poster presentation.
10. In our experience, employing an unevenly distributed texture has been problematic; when some areas are left devoid of texture while others are uniformly covered, the empty areas tend to be perceived as holes.
11. Detlev Stalling and Hans-Christian Hege (1995) Fast and Resolution Independent Line Integral Convolution, SIGGRAPH '95 Conference Proceedings, Annual Conference Series, pp. 249-256.