Visible Surface Ray-Tracing of Stereoscopic Images


Stephen J. Adelson and Larry F. Hodges
Graphics, Visualization and Usability Center
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332
hodges@cc.gatech.edu, steve1@cc.gatech.edu

Abstract

Ray-tracing is a well-known method for producing realistic images. If we wish to view a ray-traced image stereoscopically, we must create two distinct views of the scene: a left-eye view and a right-eye view. The most straightforward way to do this is to ray-trace both views, doubling the work required for a single perspective image. We have developed a reprojection algorithm that produces stereoscopic images efficiently with little degradation in image quality. In this paper, we derive the stereoscopic separation technique required to transform a ray-traced left-eye view into an inferred right-eye view. Minor changes needed to expand the technique from single to multiple rays per pixel are noted. Finally, results from evaluating several random scenes are given, which indicate that the second view in a stereo image can be computed with less than 16 percent of the effort of computing the first.

INTRODUCTION

Ray-tracing, while widely used to produce realistic scenes, is notorious for the amount of computation required. Even if we ignore reflection and transparency, the algorithm is computationally intensive. If we desire to create a stereoscopic ray-traced image, we must have two slightly different perspective views of the same scene, potentially doubling the required work. Therefore, any savings which can be derived for stereoscopic ray-traced images would be useful. Badt has described a speedup technique for animation of ray-traced scenes [1]. As will be shown, we can think of the two views of a stereoscopic image as two animation frames where the second viewing position differs from the first only along the horizontal axis.
Given this model, the Badt technique will be refined in this paper for use with stereoscopic images to achieve large computational savings in creating the second view of a ray-traced stereo pair. Some preliminary work has been performed previously in this area [4], but only in implementing Badt's algorithm for stereoscopic views, not in using the modifications implied by stereo pair creation.

PERFORMANCE MEASUREMENTS

Our purpose is to develop an algorithm that simultaneously operates on the two different images of a scene by extending a single-image algorithm. Using the single-image algorithm, if we can complete an operation on a scene in time t, then the upper bound on completing that operation simultaneously on two slightly different views is approximately 2t. The lower bound is t (i.e., the algorithm operating on two images can be done as efficiently as on one image). We have chosen to measure computational savings in this paper by comparing the simultaneous algorithm against the time beyond that needed to compute a single view. If we view the time t to compute one eye view as our benchmark and the increase in time beyond t as the time needed to compute the second view, then the decrease in computing time for the second view ranges from 0% (total computing time for the second view = t) to 100% (total computing time for the second view = 0).

ELEMENTARY RAY-TRACING

In ray-tracing, a light ray is traced backward from the viewpoint through each pixel of the screen. If an object is struck by a ray, several rays are sent off from the intersection point: shadow rays toward the light sources to determine shadows, a reflection ray at an equal but opposite angle relative to the plane perpendicular to the intersection normal, and a transparency ray through the object. An illustration of these rays appears in figure 1. The reflection and transparency rays may strike other objects and send out yet more rays, and from all these rays the correct color is synthesized.
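As a toy numeric illustration of the savings metric defined above (the timings below are invented for illustration, not measured results from the paper):

```python
# Toy illustration of the savings metric: t_single is the time to
# ray-trace one eye view, t_both the total time for the simultaneous
# algorithm on both views. Timings here are invented, not measured.

def second_view_savings(t_single, t_both):
    """Percent decrease in computing time for the second view,
    relative to fully re-tracing it (0% = no savings, 100% = free)."""
    extra = t_both - t_single          # time attributed to the second view
    return 100.0 * (1.0 - extra / t_single)

# If both views together take 1.16 t, the second view cost 0.16 t,
# an 84% decrease versus tracing it from scratch.
assert abs(second_view_savings(1.0, 1.16) - 84.0) < 1e-6
```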
We will use a simple model of ray-tracing and assume that all objects are diffuse reflectors with specular highlights and that there is a single light source. This means that we ignore the reflection and transparency rays and deal only with the shadow ray. The color of a pixel can then be determined using only two rays: the initial intersection ray and one shadow ray.

STEREOSCOPIC VIEWS

A standard single-view perspective projection projects the three-dimensional scene onto the x-y plane with a center of projection at (0, 0, -d).

Figure 1. Ray-Tracing Rays: S (shadow), R (reflection), T (transparency)

Given a point P = (x_p, y_p, z_p), the projected point is (x_s, y_s) where

    x_s = x_p d / (z_p + d)    (1)
    y_s = y_p d / (z_p + d).   (2)

For stereoscopic images we require two differently projected views of the scene. Each of the observer's eyes is presented with a view of the scene from a different (usually horizontally displaced) position. For our purposes we will derive these views from two different centers of projection, but other methods are possible [1, 5, 6]. Assume a left-handed coordinate system with the viewplane located at z = 0 and two centers of projection: one for the left-eye view (LCoP) located at (-e/2, 0, -d) and another for the right-eye view (RCoP) located at (e/2, 0, -d), as shown in figure 2. A point P = (x_p, y_p, z_p) projected to the LCoP has projection plane coordinates P_sl = (x_sl, y_sl) where

    x_sl = (x_p d - z_p e/2) / (d + z_p)    (3)
    y_sl = y_p d / (d + z_p).               (4)

The same point projected to the RCoP has projection plane coordinates P_sr = (x_sr, y_sr) where

    x_sr = (x_p d + z_p e/2) / (d + z_p)    (5)
    y_sr = y_p d / (d + z_p).               (6)

Note that y_sl = y_sr, so this value need be computed only once. Also note that the expressions for x_sr and x_sl are identical except for whether the terms in the numerator are added or subtracted. Assuming that we have already computed the left-eye view and have access to its component terms, only one addition and one multiplication are necessary to compute the right-eye view:

    x_sr = x_sl + e(z_p / (d + z_p)).    (7)

In other words, a point will move horizontally between the views by a distance dependent on the depth z_p of the point to be projected, the distance d from the viewing position to the projection plane, and the distance e between the two viewing positions. The illumination intensity of an object depends only upon the position of the object and light sources,
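As a sanity check, the shortcut of equation (7) can be compared against projecting the point directly with equation (5). A minimal sketch (the numeric values of d, e, and the point are arbitrary illustrative choices):

```python
# Check that the reprojection shortcut of equation (7) agrees with the
# direct right-eye projection of equation (5). All numeric values below
# are arbitrary illustrative choices.

def project_left(xp, zp, d, e):
    """Left-eye projection, equation (3)."""
    return (xp * d - zp * e / 2) / (d + zp)

def project_right(xp, zp, d, e):
    """Right-eye projection, equation (5)."""
    return (xp * d + zp * e / 2) / (d + zp)

def reproject(xsl, zp, d, e):
    """Right-eye x from the left-eye x, equation (7):
    one multiplication and one addition beyond the left view."""
    return xsl + e * (zp / (d + zp))

xp, zp, d, e = 3.0, 10.0, 2.0, 0.5
xsl = project_left(xp, zp, d, e)
assert abs(reproject(xsl, zp, d, e) - project_right(xp, zp, d, e)) < 1e-12
```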

not the viewing position. However, specular highlights are dependent on the position of the viewer. If we use the familiar Phong highlighting term [3], then the highlight can be computed using the formula F(N · B)^a, where F is a constant for the object, N is the unit normal vector at the intersection point (x_n, y_n, z_n), B is the normalized sum V + L, where V is the vector to the viewing position (x_v, y_v, z_v) and L is the vector to the light source (x_l, y_l, z_l), a is the Phong glossiness factor, and · is the dot product.

Figure 2. Stereoscopic Perspective Projection

The term (N · B) is therefore

    {x_n(x_v + x_l) + y_n(y_v + y_l) + z_n(z_v + z_l)} / √[(x_v + x_l)^2 + (y_v + y_l)^2 + (z_v + z_l)^2].    (8)

Let V be the vector from the intersection point to the left center of projection; then the Phong term for the right view is

    {e x_n + x_n(x_v + x_l) + y_n(y_v + y_l) + z_n(z_v + z_l)} / √[(x_v + x_l)^2 + (y_v + y_l)^2 + (z_v + z_l)^2 + e(2(x_v + x_l) + e)].    (9)

If we represent the numerator of equation (8) as P_1, the squared denominator of the same equation as P_2, and (x_v + x_l) as P_3, then the Phong highlighting term for the left view is

    P_1 / √P_2    (10)

and the right-view term is

    (e x_n + P_1) / √[P_2 + 2eP_3 + e^2].    (11)

We can compute the right-eye specular term from the left-eye term with only seven additional operations.

PREVIOUS WORK

Badt describes a method for using reprojection to create a ray-traced scene in which the viewpoint has moved some small distance [2]. Reprojection involves moving the pixels in one view to their position in the second and cleaning up the image by recalculating only those pixels whose value is unknown or in question after the viewpoint has translated.
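Returning briefly to equations (10) and (11): the incremental specular computation is easy to verify numerically. The sketch below caches P_1, P_2, and P_3 while shading the left view and reuses them for the right view; the vectors and eye separation are arbitrary illustrative values:

```python
import math

# Sketch of equations (10) and (11): recomputing the (N . B) part of the
# Phong term for the right eye from quantities cached for the left eye.
# n = unit surface normal, v = vector to the left viewing position,
# l = vector to the light source, e = eye separation (illustrative values).

def phong_terms(n, v, l):
    """Left-eye term (equation 10), caching P1, P2, P3 on the way."""
    s = [v[i] + l[i] for i in range(3)]      # components of V + L
    P1 = sum(n[i] * s[i] for i in range(3))  # numerator of (8)
    P2 = sum(c * c for c in s)               # squared denominator of (8)
    P3 = s[0]                                # x_v + x_l
    return P1 / math.sqrt(P2), (P1, P2, P3)

def phong_right(n, e, P1, P2, P3):
    """Right-eye term from the cached values (equation 11):
    seven additional operations instead of a full recomputation."""
    return (e * n[0] + P1) / math.sqrt(P2 + 2 * e * P3 + e * e)

# Check against recomputing from scratch with the right-eye position,
# which is the left-eye position shifted by (e, 0, 0).
n, v, l, e = (0.0, 0.6, 0.8), (1.0, 2.0, 5.0), (-2.0, 3.0, 4.0), 0.5
left, (P1, P2, P3) = phong_terms(n, v, l)
direct, _ = phong_terms(n, (v[0] + e, v[1], v[2]), l)
assert abs(phong_right(n, e, P1, P2, P3) - direct) < 1e-12
```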
Since a stereo pair is equivalent to two images in which the viewpoint moves a small horizontal distance, it is instructive to see the original algorithm before it is modified for stereoscopic images. All pixels from the original frame, or view, are reprojected to their new positions in any order. Simultaneously, a hit-point count image is generated. This image represents the number of "hits", or reprojections, to a given pixel position: one is added to the pixel position for every reprojection to that pixel. Some pixels may receive no hits. Both the pixel image and the hit-count image are stored to disk. The algorithm is concerned with four kinds of pixels, as represented by the hit-point count image: missed pixels, which received no hits; overlapped pixels, which received multiple hits; bad pixels, whose hit count is one yet which do not match their surroundings; and good pixels, whose hit count is one and which do match. The first two types of pixels are easy to find in the hit-point count image and can be repaired by again ray-tracing, or re-tracing, the pixel. The second two are more difficult to distinguish. Previous papers have used the idea of a filter centered on a pixel to determine whether a pixel is good or bad. The hit counts of all pixels under the filter are added together. If the combined value differs from the expected number of hits by a fixed value, then the pixel is considered bad and is re-traced. Badt shows that his algorithm could save a significant amount of effort when creating the second image; in his tests, almost 62% of the reprojected pixels were judged to be good and did not require re-tracing. Badt's technique can be used as is to create stereo pairs. A paper by Ezell and Hodges reported the results of such an endeavor; they showed that an average of 73% of the reprojected pixels were good [4]. However, when we use a modified version of this technique on stereoscopic images, the bad and overlapped pixel problems can be eliminated, as can the necessity for a filter, and the resulting performance is dramatically improved.

ELIMINATING THE BAD AND OVERLAPPED PIXEL PROBLEMS

As stated previously, we must consider the image artifacts in a reprojected image caused by overlapped pixels, missed pixels, and bad pixels. It will now be shown that an algorithm can be derived for stereoscopic reprojection such that we need concern ourselves only with missed pixels, whose solution entails re-tracing the pixel in question.

If we calculate the pixels of the left image of a stereo pair first, the second image will have each corresponding pixel at a point equal to or to the right of the original pixel, as per equation (7). We also know from equations (4) and (6) that a pixel will not change its y value. We can therefore calculate both views of the image simultaneously, scan-line by scan-line.

Eliminating the Overlapped Pixel Problem

The overlapped pixel problem occurs because more than one pixel in the first image is mapped to the same pixel in the second image. However, since we are able to create this image a scan-line at a time, a small data structure can be created to hold the points reprojected from the left image to the right. In its simplest form, this structure need only have an entry for each pixel in a scan-line, with each entry containing a Boolean flag representing a reprojection to the corresponding pixel and a value for its color. Using this data structure, we can overwrite reprojected pixel values, resulting in the correct value at the end of processing.

Consider the ray from the RCoP through a pixel PIX_1 at (x_1, y, 0). The equation of the z position of this ray is

    z = -d + T d = d(T - 1)    (12)

where T ≥ 1 means that the ray is in image space. An object which intersects this ray in image space will appear in PIX_1 in the right-eye view. Consider another ray, from the LCoP through PIX_2 at (x_2, y, 0). This ray will intersect the previous ray when the x values of the rays coincide:

    x_l = -e/2 + T(x_2 + e/2),  x_r = e/2 + T(x_1 - e/2)
    -e/2 + T(x_2 + e/2) = e/2 + T(x_1 - e/2)
    T(x_2 + e/2) - T(x_1 - e/2) = e
    T(x_2 - x_1 + e) = e
    T = e / (x_2 - x_1 + e).    (13)

As illustrated in figure 3, a point on the surface of an object will appear in PIX_2 in the left-eye view and in PIX_1 in the right-eye view if and only if there is an object surface at T = e / (x_2 - x_1 + e), T ≥ 1.
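The scan-line record described above, with its left-to-right overwrite rule, can be sketched as follows (the depths, colors, and rounding to integer pixel columns are illustrative choices, not prescribed by the paper):

```python
# Minimal sketch of the scan-line record: left-image pixels are processed
# in left-to-right order, and collisions simply overwrite, so the surface
# reprojected last (which the derivation shows is the closer one) survives.
# new_x stands in for the integer-rounded reprojection of equation (7);
# the depths and colors below are illustrative.

M = 8                                     # scan-line length
line_rec = [{"hit": False, "color": None} for _ in range(M)]

def new_x(x, zp, d, e):
    """Reprojected right-eye column for left-eye column x (equation 7)."""
    return x + round(e * zp / (d + zp))

d, e = 2.0, 4.0
left_scanline = [(x, 10.0, ("col", x)) for x in range(M)]  # (x, depth, color)

for x, zp, color in left_scanline:        # left-to-right order matters
    nx = new_x(x, zp, d, e)
    if 0 <= nx < M:
        line_rec[nx]["hit"] = True        # overwrite any earlier hit
        line_rec[nx]["color"] = color
```

Pixels that end the pass with `hit` still false are the missed pixels, which are the only ones that must be re-traced.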
Suppose we have another pixel, PIX_3, at (x_3, y, 0), x_2 < x_3 ≤ x_1. A ray from the LCoP through PIX_3 will intersect our ray from the RCoP through PIX_1 at

    T_0 = e / (x_3 - x_1 + e).    (14)

If there is an object surface at this point, it will appear in PIX_3 in the left-eye view and in PIX_1 in the right-eye view. We are now faced with the overlapped pixel problem. However, note that the denominator of equation (13), x_2 - x_1 + e, is smaller than that of equation (14), x_3 - x_1 + e. Hence T_0 < T, and the object surface at T_0 is closer to the viewing plane than the surface at T.

Figure 3. Solving the Overlapped Pixel Problem

Therefore, if we process pixels in left-to-right fashion and overwrite the scan-line record as we go, we will always end up with the correct object reprojected to the pixels in the right-eye view.

When pixels are reprojected from the left-eye view to the right-eye view, they move according to equation (7). This distance is dependent upon the z value of the image in the original pixel. It is possible, however, that the z values of two adjacent pixels in the left-eye view are such that the second will be reprojected further than the first, leaving a gap of 1 to e-1 pixels. Further, it is possible that other pixels may have reprojected into this gap; these gap-filling pixel values may or may not belong in the right-eye view. It is these questionable pixels that cause the bad pixel problem.

The bad pixel problem is illustrated in figures 4a and 4b. In figure 4a, we can see that there is indeed a true gap, and the farther object A should be reprojected into this area. In figure 4b, however, there is an intervening object which would block A were the right-eye view fully ray-traced. This problem was previously solved by Badt by using the hit-point count image and a filter to test each pixel. It is hoped that the bad pixels will not totally fill the gap, and that surrounding missed pixels will disclose the location of improper pixels.
Unfortunately, it is fairly easy to imagine a case where improper pixels are missed, or where they are caught but the filter causes many good pixels to be re-traced. This problem can be solved for stereoscopic images by eliminating the projected values in intervening pixels when two adjacent pixels in the left-eye view, x and x+1, are reprojected to the right-eye view such that newx(x+1) - newx(x) > 1, where newx is the reprojection of equation (7).

Figure 4a. A should appear in the right-eye view.

Since pixels are often small relative to the objects they are displaying, the case of figure 4b seems much more likely than the case of figure 4a. Certainly, it is possible that this solution eliminates several good pixels, but the possibility of waste when displaying spheres or other smooth surfaces appears slight. We shall see later that the number of pixels reset in this process is, on average, small. In any case, it is absolutely impossible for bad pixels to exist in the right-eye view when using this method.

FINAL ALGORITHM

We now define the final algorithm for creating stereo pairs of ray-traced images. Assume we are creating both views in tandem or have access to pixel information of the left image in left-to-right fashion for each scan-line. For M by M resolution, we need a scan-line record line_rec of length M containing the following information for each pixel j, 0 ≤ j < M:

    HIT   : flag which is TRUE if there is a reprojection to pixel j, FALSE otherwise
    COLOR : the color for the right-eye view of pixel j.

When we are creating images in tandem or we know the value of e in advance, COLOR can be computed as we go. Otherwise, we must also have access to information which will allow us to create a color for the right-eye view; this would include light-source position, Phong
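Putting the pieces together, one scan-line of the algorithm might be sketched as follows. This is a hedged sketch under stated assumptions: ray_trace_right is a hypothetical stand-in for the full ray-tracer, rounding equation (7) to integer columns is an illustrative choice, and the depths and colors are invented.

```python
# Sketch of one scan-line of the final algorithm: reproject the left view
# left-to-right with overwriting, clear intervening pixels when a gap of
# more than one pixel opens (the bad-pixel rule), then re-trace only the
# missed pixels. ray_trace_right is a hypothetical stand-in for the
# full ray-tracer; depths and colors are illustrative.

def reproject_scanline(left_pixels, d, e, M, ray_trace_right):
    """left_pixels: list of (color, z) for columns 0..M-1 of the left view.
    Returns the right-eye colors for the same scan-line."""
    hit = [False] * M
    color = [None] * M
    prev_nx = None
    for x, (col, zp) in enumerate(left_pixels):
        nx = x + round(e * zp / (d + zp))            # equation (7)
        if prev_nx is not None and nx - prev_nx > 1:
            for j in range(prev_nx + 1, min(nx, M)): # clear the gap:
                hit[j] = False                       # possible bad pixels
                color[j] = None
        if 0 <= nx < M:
            hit[nx] = True                           # overwrite on overlap
            color[nx] = col
        prev_nx = nx
    for j in range(M):                               # only missed pixels
        if not hit[j]:                               # are re-traced
            color[j] = ray_trace_right(j)
    return color

# Example: four left-view pixels at constant depth 10 all shift by 3
# columns; columns 0-2 are missed and get re-traced.
res = reproject_scanline([(("L", x), 10.0) for x in range(4)],
                         2.0, 4.0, 4, lambda j: ("R", j))
assert res == [("R", 0), ("R", 1), ("R", 2), ("L", 0)]
```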