Is 3-D TV preparing the way for holographic TV?

Is 3-D TV preparing the way for holographic TV?

V. Michael Bove, Jr., Quinn Y. J. Smithwick, James Barabas, and Daniel E. Smalley
Object-Based Media Group, MIT Media Laboratory, Room E15-368B, 20 Ames St., Cambridge, MA USA

ABSTRACT

Holographic television, or holo-video, has been seen by many as the ultimate development not only of holography but of electronic visual communication generally. To make widespread, successful holo-video, four things are needed: 1) content, 2) a distribution mechanism, 3) sufficient processing at the receiver, and 4) suitable electro-optics at the display, and all of these must be available at prices suitable for consumers. In the past one to two years there has been great interest in 3-D television, but few researchers seem to have noted that many of the recent developments in 3-D TV are also solving, or at least pointing the way to solving, problems associated with holo-video. We examine particularly relevant developments in content capture/creation, content representation (including standardization activities), and the increased suitability of graphics processors for 3-D applications, and connect these with work at the MIT Media Laboratory in developing a holo-video display suitable for consumer use.

Keywords: synthetic holography, 3-D display, holographic video, computer graphics, graphics processors

1. INTRODUCTION

The technological history of 3-D television dates back more than sixty years (see, e.g., [1]); however, despite several generations of attempts at commercialization, 3-D TV has so far failed to take hold as a widespread consumer product.
Over the past several years, significant engineering, content-creation, standardization, and marketing effort has been focused on home 3-D TV, a situation which seems to have at least the following causes:

- the transition of broadcasting in many countries to digital, and in many cases high-definition, leaves manufacturers and content creators searching for the next form of premium television
- the current generation of consumers is not familiar with the long, often-tawdry, and largely unsuccessful history of 3-D TV and cinema
- display technology is undergoing rapid technological change, which means that displays are being replaced rapidly and a window of opportunity exists for including 3-D functionality
- new delivery mechanisms such as internet video and Blu-ray can potentially carry 3-D content and are more flexible and extensible than previous media
- lowered cost of various technologies such as shutter glasses, digital interconnects, and high-refresh-rate displays
- much current content such as CGI programs and video games is already suitable for 3-D display, while the reduced size and cost of high-quality video cameras makes it easier to build stereoscopic camera rigs
- in the gaming and internet video domains, off-the-shelf graphics processing units (GPUs) and video decoders are now fast enough to handle stereo imagery

Some notable recent events include the adoption by MPEG of the multiview video coding (MVC) extension to the H.264 advanced video coding (AVC) standard, the creation of the 3D@Home Consortium, the launching by the Society of Motion Picture and Television Engineers of a 3-D Home Entertainment Task Force, and the evaluation by the Blu-ray Disc Association of a 3-D extension proposed by Panasonic. An EU-funded consortium called MOBILE3DTV is developing standards and visual optimizations for transmitting stereoscopic video over the DVB-H system to mobile devices equipped with autostereoscopic screens.

All the efforts discussed in this section involve stereoscopic television, meaning that they use only the depth cues of binocular parallax and convergence (though there have from time to time been experiments and even commercial trials of home television employing less-well-known psychophysical effects such as the Pulfrich effect [2]). But they have relevance to holographic television, whose ultimate goal is less about tricking the eye and more about reconstruction of light wavefronts identical to those that would have originated from a real scene, as they not only ready the market for high-quality dynamic 3-D imagery but also may allow easier construction of a successful holo-video system. The notion of holographic television still seems to have a powerful hold on the public's imagination, as demonstrated by recent excitement over technologies that are called holographic but don't use holographic principles, such as CNN's "holographic" interview system, which is really just a 360-degree camera array combined with a video compositor. The fundamental question at the heart of designing a stereoscopic display system is, "How do we get the correct view to the correct eye?" while design of a transmission system also brings in questions of backward compatibility (Will viewers with 2-D systems see normal TV? Will already-deployed 2-D codecs, bitstream formats, and other legacy hardware and software be able to handle the 3-D content?) and efficiency (Can we take advantage of the significant redundancy between the stereo views?). With respect to making each eye see an appropriate parallax view, the fundamental approaches have been the same for many years (a 1950 reference [1] discusses and evaluates most of these principles, and Okoshi's 1976 book [3] explores them in depth), but changes in technology affect the relative practicality and cost of each.
While it may not have been true even in the recent past, current electronic display systems generally contain sufficient memory and processing that the 3-D display process can be decoupled from the data representation (e.g. temporally-alternating transmitted views can be used on a display that requires them to be shown simultaneously on alternating columns of pixels). Well-known methods for controlling the views that each eye sees include:

- Color multiplexing: Monochrome views can be reproduced in complementary colors (commonly red/cyan) to create an anaglyph image and viewed through correspondingly-colored eyeglass lenses. Such a system does not, of course, permit color imagery, and can be fatiguing over long periods of time.
- Time multiplexing: If a video display has a sufficiently high temporal refresh rate to avoid visible flicker, left and right views can be alternated in time and viewed through glasses with active synchronous shutters. The cost of these glasses has dropped significantly in recent years, but they are still somewhat bulky and require batteries. Liquid-crystal monitors are now available with refresh rates of 120 Hz and higher, adding to the recent interest in such an approach. If the glasses can track the viewer's position and adjust the shutter phasing appropriately, there is no reason that a monitor with a high enough refresh rate (LG has recently shown a 480 Hz LCD panel) couldn't support more than two parallax views, though the authors are not aware of any demonstrations of this sort.
- Polarization multiplexing: Projectors can be made to alternate their polarizations temporally, two orthogonally polarized projectors can be used simultaneously, or direct-view displays can be built such that columns of pixels have differing polarizations; in each case inexpensive passive glasses can be worn by viewers, though at the cost of increased display complexity.
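The color-multiplexing method above can be illustrated in a few lines of code. This is a generic sketch (not from the paper), assuming 8-bit RGB input images:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a stereo pair into a red/cyan anaglyph.

    left, right: H x W x 3 uint8 RGB images.
    The left view supplies the red channel and the right view
    the green and blue (cyan) channels, matching red/cyan glasses.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red channel from the left eye's view
    out[..., 1:] = right[..., 1:]  # green and blue from the right eye's view
    return out
```

Viewed through red/cyan glasses, each eye receives mainly its own view, which is why the technique cannot also convey full color.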
- Angular multiplexing (autostereo): If the display itself has the ability to steer views, the viewer needn't wear special eyewear to perceive a stereoscopic image. Parallax barriers and 1-D or 2-D lenticular arrays are the usual mechanisms for performing this function, but holographic video displays (to be discussed in the next section) have this property as well.

The parallax views also need to be represented in a suitable fashion for transmission and storage. In order of increasing complexity, approaches include:

- Simulcasting: Two or more parallel streams can be transmitted using standard codecs. Such an approach is compatible with existing standards and 2-D displays (which can simply pick out one stream), but it is inefficient as it takes no advantage of the statistical and perceptual redundancy between views.
- Interleaving in one stream: Views can be anamorphically squeezed into one image frame for an existing codec, or can be alternated temporally (H.264 AVC supports a flag that identifies L/R and predicts L views and R views separately). This approach can be compatible with existing codecs but not with 2-D displays.
- Multiview coding: In a predictive coder (such as the multiview extension to H.264) frames for each view can be predicted not just from temporal neighbors but also from neighboring views. This technique is more efficient than the preceding methods (though there are limits to how much redundancy such a coder can find), and a decoder for a 2-D display can simply pick out a single view.
- 2-D stream plus depth map: A very efficient and display-independent representation of 3-D images can be made by sending a single 2-D view and a depth map which is used to create differential parallax shifts of image regions at the display. The depth map might even be transmitted in MPEG private data, so that ordinary 2-D displays can ignore it. As might be imagined, such a representation has the disadvantages that occlusions cause missing regions that must somehow be synthesized [4], and that the depth map must be computed during image capture and the parallax views resynthesized at the display.
- Full 3-D representation: Most display-independent of all is transmitting a full 3-D texture-mapped model of the scene, from which a GPU at the display can synthesize parallax views appropriate for the display. There isn't currently a broadly adopted standard for transmission of such content in real time (though on-line video games have demonstrated the possibility of doing so), and creating 3-D models of real scenes continues to be impractical, so this approach is suitable mostly for synthetic imagery.

2. RELEVANCE TO HOLO-VIDEO

Many research groups are currently exploring the creation of dynamic 3-D electronic displays based on diffraction; the Object-Based Media Group at the MIT Media Laboratory has added the explicit requirement of making a display that is suitable for consumers in the relatively near term. In particular, this means that the display must be compact and rugged, manufacturable for a few hundred dollars, and able to be driven by standard hardware (i.e. the graphics processor in a PC or video game system) over standard video signal interfaces.
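The 2-D-plus-depth representation can be sketched minimally as follows. This is an illustrative toy (assuming a single-channel image and a depth map normalized to [0, 1], with an assumed maximum disparity), and its naive hole handling shows exactly why the missing-region synthesis problem arises:

```python
import numpy as np

def render_parallax_view(image, depth, max_shift=2):
    """Synthesize one parallax view from a 2-D image plus depth map.

    image: H x W array (single channel for simplicity).
    depth: H x W array in [0, 1]; 1 = nearest to the viewer.
    Each pixel is shifted horizontally in proportion to its depth.
    Holes (zeros) remain where no source pixel lands; a real system
    must somehow synthesize these occlusion regions.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    shifts = np.round(depth * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + shifts[y, x]
            if 0 <= nx < w:
                # naive: later pixels simply overwrite earlier ones;
                # real depth-image-based rendering resolves such
                # conflicts by depth ordering
                out[y, nx] = image[y, x]
    return out
```

A production renderer would also vary the shift sign per view to generate the full fan of parallax views from the single transmitted stream.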
As a consequence, we need to bear in mind developments and trajectories in related consumer technologies, as we hope to make use of as many of these as possible. A consumer holo-video system (as indeed any consumer video system) requires for success the availability of four key elements:

- content
- a distribution mechanism
- sufficient processing at the receiver
- suitable electro-optics at the display

To the degree that industry is already creating some of these, it makes the development of an end-to-end holo-video system easier, as it removes the need for the researcher to build all parts of the chain. Recall that a holo-video display system, because of its ability to synthesize lightfields, is able to reproduce imagery captured for a two-or-more-view stereoscopic display, or for an integral display. Thus there is certain to be imagery in distribution that can be shown on a holo-video display, though even better imagery is available from content such as games that is already distributed as 3-D models, assuming that a receiver can convert parallax views or 3-D models into diffraction patterns in real time. An example of real-time conversion of integral images into a holographic lightfield is given in reference [5]. Cinematographic training and technical literature are also beginning to re-emphasize the craft of 3-D cinematic storytelling (see for instance [6]), and the scene composition and editing lessons apply equally well to holographic video. To be a bit more specific about the holographic display of stereo video sources: since a holographic video display allows very fine directional control of light, it is quite simple to cause a left-eye view and a right-eye view to come from the display at symmetrical angles on each side of the optical center of the screen. To see the images, a viewer would have to be directly centered on the screen and at a specific distance.
But it's also possible to cause the views to come out at a range of directions on each side, which relaxes the distance restriction. It is further possible to support multiple viewers (or to give a single viewer the option to be elsewhere than centered on the screen) by repeating ranges of directions L/R/L/R/L/R, et cetera. However, in that case half the viewing positions cause the viewer to see a pseudoscopic image, and it would perhaps be better to have a blank image at a narrow range of directions after each L/R pair, i.e. L/R/blank/L/R/blank/L/R, such that for a reasonable range of viewing distances, if a viewer can see a view in each eye the result will be correct stereo (here presumably the viewer will move his or her head slightly to such a position). See Fig. 1.
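The L/R/blank repetition can be stated as a tiny mapping from angular view-zone index to the image shown there (an illustration of the scheme, not display code):

```python
def view_for_direction(i, pattern=("L", "R", "blank")):
    """Map an angular view-zone index to the displayed view.

    With the repeating L/R/blank pattern, no two adjacent zones ever
    form an R/L (depth-reversed, pseudoscopic) pair: a viewer whose
    eyes land on a blank simply shifts slightly until both eyes see
    views, and the result is then guaranteed to be correct stereo.
    """
    return pattern[i % len(pattern)]
```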

Our research has for many years taken the viewpoint that end-to-end transmission of diffraction patterns for holo-video is neither efficient nor necessary if they can be generated at the receiver. While our desire to do so led us to develop specialized and often architecturally unusual computational systems [7], more recently we have instead concentrated on tracking developments in GPU hardware and corresponding software pipelines, trying to achieve sufficient performance and quality from off-the-shelf infrastructure. Thus our target is a holo-video monitor that can plug into a standard PC or game machine using standard electrical interfaces and yet provide holographic images.

Fig. 1. Viewing a stereo pair on a holo-video display: (Top) showing each parallax view at a single angle requires the viewer to be at the center of the screen and at a specific distance. (Center) showing each view at a range of angles loosens the distance restriction. (Bottom) repeating the view pairs supports multiple horizontal positions or multiple viewers; gaps prevent pseudoscopic images.

3. MORE SPECIFICS ABOUT OUR RESEARCH

The Mark III display [8] builds on lessons from previous generations of MIT Media Laboratory holo-video displays. Like them it is horizontal-parallax-only and based on the Scophony electro-optical architecture; however, it uses a different sort of light modulator (a very-high-bandwidth 2-axis lithium niobate guided-wave device) and a simpler and cheaper optical architecture that eliminates the horizontal scanning mirrors and some other components, and permits the optical path to be folded to fit into a CRT-monitor-like box. The initial version of this display is for proof-of-concept purposes and has a small monochrome screen whose specifications are:

- 440 scan lines, 30 Hz
- 24° view angle
- 80 mm x 60 mm x 80 mm (W x H x D) view volume

Some of these numerical specs are limited by our requirement that this display be drivable by a single standard dual-head GPU over its monitor interfaces. Likewise, the HPO nature of the display is a consequence of the desire to compute the diffraction patterns on the fly, and also of the frame buffer size limitations of the GPU. Reference [9] provides a detailed description of our rendering method and how we map it onto the GPU hardware and software pipelines, but we shall summarize the ideas here. It is currently impractical to compute dynamic holographic imagery for scenes of any significant visual complexity using physically-based algorithms such as interferometric point-cloud methods. Instead we aim to construct lightfields by a diffraction-specific method, superposing precomputed basis diffraction fringes modulated by views of the scene rendered from a 3-D model. This approach handles complex surface textures, occlusions, and transparent objects correctly, and (more relevant to the focus of this paper) provides a point in the process into which stereoscopic real-world images captured by multiple cameras can be placed (instead of images rendered from 3-D models). The large amounts of precomputation and table lookup make this an inherently fast approach, but we have put significant effort over the past five years into mapping the algorithmic steps onto the operations that GPUs do well. Current GPUs are approaching teraflop performance, but can deliver that amount of computation only if the algorithm can be cast into a form that makes use of the fact that most GPUs are vector processors for four-component (nominally RGBA) vectors, and have certain functions that are optimized for certain vector data types. It is also essential to consider how to maximize parallelism and to keep all parts of the hardware pipeline busy as much as possible.
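The four-component vectorization point can be made concrete with a small sketch; numpy stands in here for the shader, and the packing scheme is purely illustrative, not the paper's actual texture layout:

```python
import numpy as np

def chirp_dot_rgba(chirp, modulation):
    """Pack four consecutive fringe samples into one RGBA-style texel.

    GPUs are optimized for 4-vector operations, so casting the
    per-sample multiply-accumulate as one dot product per RGBA texel
    (rather than four scalar multiplies) exploits the hardware the
    way the text describes. Inputs must have length divisible by 4.
    """
    c = chirp.reshape(-1, 4)       # each row = one RGBA texel of chirp samples
    m = modulation.reshape(-1, 4)  # the matching modulation texel
    return np.einsum('ij,ij->i', c, m)  # one 4-vector dot product per texel
```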
Formerly the hardware pipeline and the corresponding software interface involved a largely fixed set of functions connected in a fixed way. GPU hardware and software now use programmable shaders whose mathematical transformations or texture and lighting calculations can be changed. As more programmers use GPUs to handle a variety of stream calculations, even ones that have no connection to graphics, this flexibility should continue to increase. Our method of diffraction-specific computation treats the holographic stereogram as a summation of overlapping amplitude-modulated chirped gratings. This superposition of gratings on the hologram plane produces a set of view-directional emitters on an emitter plane. Each chirp focuses light to create a point emitter, while the angle-dependent brightnesses of the views are encoded in the amplitude modulation (Fig. 2). We perform three main steps: 1) precomputing the basis fringe chirp vectors into a chirp texture, 2) multi-view rendering of the scene directly into the modulation vectors stored in the modulation texture using a double frustum camera, and then 3) assembling the final hologram by gathering chirp and modulation vectors via texture fetches and then performing a dot product. Each basis chirp vector is independent of the particular scene being displayed but depends on its horizontal position in the hololine. Thus these can be computed once for a given display and stored as a 1-D texture to be used as a lookup table. Then the brightness of a point emitter from a given view direction is set by modulating some portion of the chirp making that emitter: the view ray is projected back through the emitter to the hologram plane (behind the emitter plane), and then the part of the chirp that contributes to that emitter's view from that direction can be found and multiplied by the modulation value (which is calculated by rendering the scene from a large number of viewpoints).
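The chirp-superposition idea might be sketched as follows. This is a simplified single-hololine model with illustrative parameter values (not the Mark III's actual numbers), and it collapses the per-view modulation into a per-sample amplitude rather than performing the texture-fetch-and-dot-product gather the paper describes:

```python
import numpy as np

def assemble_hololine(num_samples, emitter_centers, chirp_len, modulation,
                      wavelength=633e-9, pitch=0.5e-6, focal=0.1):
    """Superpose amplitude-modulated chirped gratings on one hololine.

    modulation: (num_emitters, chirp_len) array; each emitter's chirp
    segment is scaled sample-by-sample by the brightness of the view
    ray passing through that part of the fringe.
    """
    # one basis chirp: a quadratic-phase (Fresnel) fringe that focuses
    # light to a point emitter; scene-independent, so precomputable
    x = (np.arange(chirp_len) - chirp_len / 2) * pitch
    chirp = np.cos(np.pi * x**2 / (wavelength * focal))

    line = np.zeros(num_samples)
    for e, center in enumerate(emitter_centers):
        start = center - chirp_len // 2
        seg = modulation[e] * chirp                 # amplitude-modulate the basis fringe
        lo, hi = max(start, 0), min(start + chirp_len, num_samples)
        line[lo:hi] += seg[lo - start:hi - start]   # overlapping superposition
    return line
```

With overlapping emitter spacings, neighboring chirps sum on the hologram plane just as the text describes.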
We capture the scene with a virtual camera with its centers of projection at the positions of the emitters; it uses a double frustum to capture pixels in front of and behind the emitter plane. The novel projection matrix for this camera is handled by the vertex shader of the GPU. A common technique in video games to achieve high visual quality at reduced computational cost is to use a low-polygon-count model with higher-complexity texture and surface-normal maps. We follow this trend as well and use the fragment shader of the GPU to handle lighting, normal mapping, and texture mapping. Earlier this year we reported rendering performance results for the Mark III holo-video display with an NVIDIA Quadro FX 4500 GPU (circa 2006). A 336-pixel x 440-line x 96-view hologram runs at 10 fps for a texture-mapped model with 500 polygons with per-pixel normal-mapped lighting (equivalent to a polygon model). If the modulation texture is prerendered, or the multi-view rendering and modulation vector assembly is performed once and the texture reused, the hologram is generated at 15 fps. While this is not up to the 30 fps refresh rate of the screen, it is sufficient to provide

dynamic display capabilities for entertainment or graphical-user-interface purposes. We have also applied this rendering method to our old Mark II display (Fig. 3). As of this writing we are undertaking tests using newer GPU hardware and expect to release better performance figures shortly. We are also working on relaxing the restriction that emitters must lie on a plane, making our renderer more of a hybrid between a holo-stereogram and a physically-based model.

Fig. 2. Diffraction-specific holographic stereogram: parallax views modulate chirps and sum to form a hologram made up of view-dependent point emitters. Views are captured from the emitter plane using a double frustum geometry.

As a comparison with our earlier work with our Mark II display and three circa-2004 GPUs, taking into account the differing polygon count, frame rate, and number of GPUs, we have now increased computational performance by a substantial factor. This increase is not simply from GPU architectural improvements but also from increased optimization of our code.

Fig. 3. Still image from an animated sequence (on our old Mark II display) of a texture-mapped dinosaur running past a grove of trees.

4. CONCLUSIONS AND OBSERVATIONS

The era of interactive consumer holographic displays has nearly arrived. We have developed a rendering algorithm for commodity hardware which can render holographic stereograms of usable dimensions and large view numbers from modestly-sized texture-mapped models at interactive frame rates. Our use of standard 3-D graphics APIs makes our system compatible with much existing graphics content and software, and our pipeline is compatible with real-world stereographic TV imagery as well as synthetic graphics. While we have demonstrated interactive-rate holo-video rendering at nearly SDTV resolution using a standard GPU, a collection of factors will continue to throttle the scale-up of our work. Recall that in a holo-video display (unlike a normal video display) the physical size of the pixels (ideally much less than one micron) is set by the physics of diffraction rather than by perceptual considerations. As a consequence, the pixel count increases with the display size (and with the square of the display size in full-parallax displays). We already need hololines that are significantly longer than the maximum line length supported by GPUs, and thus our display has to treat several scan lines from the GPU as a single hololine; a much larger screen would exceed the total pixel count per frame supported by any current or planned GPUs of which we're aware. So even though the computational speed of a GPU might grow to permit computing larger holo-video images in real time, there may not be enough framebuffer memory to hold the larger image.
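The diffraction-imposed scaling can be illustrated with the grating equation; the numbers below are illustrative estimates, not a specification of any particular display:

```python
import math

def hologram_samples_per_line(width_m, view_angle_deg, wavelength=633e-9):
    """Estimate samples per hololine from the grating equation.

    To steer first-order diffracted light out to half-angle theta, the
    fringe must be sampled at pitch p with sin(theta) = wavelength / (2p)
    (the Nyquist limit for the fringe), so p shrinks as the view angle
    grows and the sample count grows linearly with display width.
    """
    theta = math.radians(view_angle_deg / 2)
    pitch = wavelength / (2 * math.sin(theta))
    return int(width_m / pitch)
```

Even an 80 mm wide line with a 24-degree view angle works out to tens of thousands of samples, and widening the screen or view angle (let alone adding vertical parallax) multiplies the count, which is the framebuffer and interconnect pressure described above.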
And should the framebuffer size increase, the electrical interconnect may still limit the ability to get the pixels out of the GPU and into the display (we're currently using all six channel outputs of a dual-head GPU just to achieve enough bandwidth to get a monochrome hologram into Mark III). MIT's first-generation holo-video display, the Mark I, in one configuration demonstrated full-color holo-video. Now that compact solid-state RGB laser sources have been developed for consumer video projection applications, converting the Mark III apparatus to full color would be relatively simple. Modifying our rendering algorithm to produce color holograms for Mark III is likewise straightforward, though different basis chirps will have to be used for red, green, and blue light so that each diffracts through the same angle. Perhaps the easiest way to handle color would be line-sequential multiplexing, where the first hololine would be the red channel, the second hololine the green channel, and the third hololine the blue channel. The color-alternation rate would be high enough to avoid the rainbow effect that frame-sequential color displays exhibit for moving objects (or moving eyes). There is less likelihood of near-term availability of real-world imagery captured by 2-D camera arrays than of stereoscopic video, but given a high-enough-resolution 2-D light modulator and sufficient GPU framebuffer size and output

bandwidth, our rendering method is applicable to full-parallax displays by using 2-D zone plate basis functions instead of 1-D chirps, rendering the scene using a double frustum camera from a 2-D array of emitter positions rather than from a 1-D line, and summing dot products of zone plate vectors and modulation vectors. While we search for solutions to the scale-up problems above that still fit within the consumer-electronics space, and while we refine the image brightness and sharpness of the Mark III system and improve the performance of the rendering pipeline, there remain other tasks to perform. Chief among these are understanding the perceptual and artistic properties of dynamic 3-D content on our display, and evaluating the performance and image quality of rendering stereoscopic real-world imagery from various sources.

5. ACKNOWLEDGMENTS

The authors gratefully acknowledge the late Steve Benton and the many alumni of the Spatial Imaging Group who started the holo-video project at the MIT Media Lab that led to our current research. This work has been supported by the Digital Life, Things That Think, and CELab consortia and the Center for Future Storytelling at the MIT Media Laboratory. Thanks also to NVIDIA for the donation of graphics hardware used in this research.

REFERENCES

1. H. R. Johnston, C. A. Hermanson, and H. L. Hull, "Stereo-Television in Remote Control," Electrical Engineering, 69, (1950).
2. A. Lit, "The Magnitude of the Pulfrich Stereo-Phenomenon as a Function of Binocular Differences of Intensity at Various Levels of Illumination," Am. J. Psychol., 62, (1949).
3. T. Okoshi, Three-Dimensional Imaging Techniques, Academic Press, New York, (1976).
4. K.-T. Kim, M. W. Siegel, and J.-Y. Son, "Synthesis of a High Resolution 3D-Stereoscopic Image from a High Resolution Monoscopic Image and a Low Resolution Depth Map," Proc. SPIE Stereoscopic Displays and Virtual Reality Systems V, 3295A, (1998).
5. M.-S. Kim, G. Baasantseren, N. Kim, J.-H. Park, M.-Y. Shin, and K.-H. Yoo, "Fourier Hologram Generation of 3D Objects Using Multiple Orthographic View Images Captured by Lens Array," Proc. SPIE Practical Holography XXIII, 7233, (2009).
6. B. Mendiburu, 3D Moviemaking: Stereoscopic Digital Cinema from Script to Screen, Focal Press, Burlington, MA, USA, (2009).
7. J. A. Watlington, M. Lucente, C. J. Sparrell, V. M. Bove, Jr., and I. Tamitani, "A Hardware Architecture for Rapid Generation of Electro-Holographic Fringe Patterns," Proc. SPIE Practical Holography IX, 2406, (1995).
8. D. Smalley, Q. Smithwick, and V. M. Bove, Jr., "Holographic Video Display Based on Guided-Wave Acousto-Optic Devices," Proc. SPIE Practical Holography XXI, 6488, (2007).
9. Q. Y. J. Smithwick, J. Barabas, D. E. Smalley, and V. M. Bove, Jr., "Real-Time Shader Rendering of Holographic Stereograms," Proc. SPIE Practical Holography XXIII, 7233, (2009).


More information

Volumetric Hyper Reality: A Computer Graphics Holy Grail for the 21st Century? Gavin Miller Apple Computer, Inc.

Volumetric Hyper Reality: A Computer Graphics Holy Grail for the 21st Century? Gavin Miller Apple Computer, Inc. Volumetric Hyper Reality: A Computer Graphics Holy Grail for the 21st Century? Gavin Miller Apple Computer, Inc. Structure of this Talk What makes a good holy grail? Review of photo-realism Limitations

More information

specular diffuse reflection.

specular diffuse reflection. Lesson 8 Light and Optics The Nature of Light Properties of Light: Reflection Refraction Interference Diffraction Polarization Dispersion and Prisms Total Internal Reflection Huygens s Principle The Nature

More information

Reprint. from the Journal. of the SID

Reprint. from the Journal. of the SID A 23-in. full-panel-resolution autostereoscopic LCD with a novel directional backlight system Akinori Hayashi (SID Member) Tomohiro Kometani Akira Sakai (SID Member) Hiroshi Ito Abstract An autostereoscopic

More information

CSE 165: 3D User Interaction

CSE 165: 3D User Interaction CSE 165: 3D User Interaction Lecture #4: Displays Instructor: Jurgen Schulze, Ph.D. CSE 165 - Winter 2015 2 Announcements Homework Assignment #1 Due tomorrow at 1pm To be presented in CSE lab 220 Homework

More information

Real-time Integral Photography Holographic Pyramid using a Game Engine

Real-time Integral Photography Holographic Pyramid using a Game Engine Real-time Integral Photography Holographic Pyramid using a Game Engine Shohei Anraku, Toshiaki Yamanouchi and Kazuhisa Yanaka Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa-ken,

More information

Extended Fractional View Integral Photography Using Slanted Orthogonal Lenticular Lenses

Extended Fractional View Integral Photography Using Slanted Orthogonal Lenticular Lenses Proceedings of the 2 nd World Congress on Electrical Engineering and Computer Systems and Science (EECSS'16) Budapest, Hungary August 16 17, 2016 Paper No. MHCI 112 DOI: 10.11159/mhci16.112 Extended Fractional

More information

Graphics Hardware and Display Devices

Graphics Hardware and Display Devices Graphics Hardware and Display Devices CSE328 Lectures Graphics/Visualization Hardware Many graphics/visualization algorithms can be implemented efficiently and inexpensively in hardware Facilitates interactive

More information

Rendering. Converting a 3D scene to a 2D image. Camera. Light. Rendering. View Plane

Rendering. Converting a 3D scene to a 2D image. Camera. Light. Rendering. View Plane Rendering Pipeline Rendering Converting a 3D scene to a 2D image Rendering Light Camera 3D Model View Plane Rendering Converting a 3D scene to a 2D image Basic rendering tasks: Modeling: creating the world

More information

Robert Collins CSE486, Penn State Lecture 08: Introduction to Stereo

Robert Collins CSE486, Penn State Lecture 08: Introduction to Stereo Lecture 08: Introduction to Stereo Reading: T&V Section 7.1 Stereo Vision Inferring depth from images taken at the same time by two or more cameras. Basic Perspective Projection Scene Point Perspective

More information

Basic distinctions. Definitions. Epstein (1965) familiar size experiment. Distance, depth, and 3D shape cues. Distance, depth, and 3D shape cues

Basic distinctions. Definitions. Epstein (1965) familiar size experiment. Distance, depth, and 3D shape cues. Distance, depth, and 3D shape cues Distance, depth, and 3D shape cues Pictorial depth cues: familiar size, relative size, brightness, occlusion, shading and shadows, aerial/ atmospheric perspective, linear perspective, height within image,

More information

The Graphics Pipeline and OpenGL IV: Stereo Rendering, Depth of Field Rendering, Multi-pass Rendering!

The Graphics Pipeline and OpenGL IV: Stereo Rendering, Depth of Field Rendering, Multi-pass Rendering! ! The Graphics Pipeline and OpenGL IV: Stereo Rendering, Depth of Field Rendering, Multi-pass Rendering! Gordon Wetzstein! Stanford University! EE 267 Virtual Reality! Lecture 6! stanford.edu/class/ee267/!!

More information

New Approaches To Holographic Video

New Approaches To Holographic Video Published in SPIE Proceeding #1732 Holographics International 92 (SPIE, Bellingham, WA, July 1992) paper #1732-48. New Approaches To Holographic Video Mark Lucente Pierre St. Hilaire Stephen A. Benton

More information

EBU TECHNOLOGY AND DEVELOPMENT. The EBU and 3D. Dr Hans Hoffmann. Dr David Wood. Deputy Director. Programme Manager

EBU TECHNOLOGY AND DEVELOPMENT. The EBU and 3D. Dr Hans Hoffmann. Dr David Wood. Deputy Director. Programme Manager EBU TECHNOLOGY AND DEVELOPMENT The EBU and 3D - What are we doing - Dr David Wood Deputy Director Dr Hans Hoffmann Programme Manager Is it the beer or the 3D that s giving me a headache? It is very easy

More information

zspace Developer SDK Guide - Introduction Version 1.0 Rev 1.0

zspace Developer SDK Guide - Introduction Version 1.0 Rev 1.0 zspace Developer SDK Guide - Introduction Version 1.0 zspace.com Developer s Guide Rev 1.0 zspace, Inc. 2015. zspace is a registered trademark of zspace, Inc. All other trademarks are the property of their

More information

Jamison R. Daniel, Benjamın Hernandez, C.E. Thomas Jr, Steve L. Kelley, Paul G. Jones, Chris Chinnock

Jamison R. Daniel, Benjamın Hernandez, C.E. Thomas Jr, Steve L. Kelley, Paul G. Jones, Chris Chinnock Jamison R. Daniel, Benjamın Hernandez, C.E. Thomas Jr, Steve L. Kelley, Paul G. Jones, Chris Chinnock Third Dimension Technologies Stereo Displays & Applications January 29, 2018 Electronic Imaging 2018

More information

PHYSICS. Chapter 33 Lecture FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E RANDALL D. KNIGHT

PHYSICS. Chapter 33 Lecture FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E RANDALL D. KNIGHT PHYSICS FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E Chapter 33 Lecture RANDALL D. KNIGHT Chapter 33 Wave Optics IN THIS CHAPTER, you will learn about and apply the wave model of light. Slide

More information

Invited Paper. Nukui-Kitamachi, Koganei, Tokyo, , Japan ABSTRACT 1. INTRODUCTION

Invited Paper. Nukui-Kitamachi, Koganei, Tokyo, , Japan ABSTRACT 1. INTRODUCTION Invited Paper Wavefront printing technique with overlapping approach toward high definition holographic image reconstruction K. Wakunami* a, R. Oi a, T. Senoh a, H. Sasaki a, Y. Ichihashi a, K. Yamamoto

More information

Module 7 VIDEO CODING AND MOTION ESTIMATION

Module 7 VIDEO CODING AND MOTION ESTIMATION Module 7 VIDEO CODING AND MOTION ESTIMATION Lesson 20 Basic Building Blocks & Temporal Redundancy Instructional Objectives At the end of this lesson, the students should be able to: 1. Name at least five

More information

PRE-PROCESSING OF HOLOSCOPIC 3D IMAGE FOR AUTOSTEREOSCOPIC 3D DISPLAYS

PRE-PROCESSING OF HOLOSCOPIC 3D IMAGE FOR AUTOSTEREOSCOPIC 3D DISPLAYS PRE-PROCESSING OF HOLOSCOPIC 3D IMAGE FOR AUTOSTEREOSCOPIC 3D DISPLAYS M.R Swash, A. Aggoun, O. Abdulfatah, B. Li, J. C. Fernández, E. Alazawi and E. Tsekleves School of Engineering and Design, Brunel

More information

DIGITAL TELEVISION 1. DIGITAL VIDEO FUNDAMENTALS

DIGITAL TELEVISION 1. DIGITAL VIDEO FUNDAMENTALS DIGITAL TELEVISION 1. DIGITAL VIDEO FUNDAMENTALS Television services in Europe currently broadcast video at a frame rate of 25 Hz. Each frame consists of two interlaced fields, giving a field rate of 50

More information

Computer Graphics Lecture 2

Computer Graphics Lecture 2 1 / 16 Computer Graphics Lecture 2 Dr. Marc Eduard Frîncu West University of Timisoara Feb 28th 2012 2 / 16 Outline 1 Graphics System Graphics Devices Frame Buffer 2 Rendering pipeline 3 Logical Devices

More information

Topics and things to know about them:

Topics and things to know about them: Practice Final CMSC 427 Distributed Tuesday, December 11, 2007 Review Session, Monday, December 17, 5:00pm, 4424 AV Williams Final: 10:30 AM Wednesday, December 19, 2007 General Guidelines: The final will

More information

Stereoscopic Presentations Taking the Difficulty out of 3D

Stereoscopic Presentations Taking the Difficulty out of 3D Stereoscopic Presentations Taking the Difficulty out of 3D Andrew Woods, Centre for Marine Science & Technology, Curtin University, GPO Box U1987, Perth 6845, AUSTRALIA Email: A.Woods@cmst.curtin.edu.au

More information

Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural

Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural 1 Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural to consider using it in video games too. 2 I hope that

More information

Automatic 2D-to-3D Video Conversion Techniques for 3DTV

Automatic 2D-to-3D Video Conversion Techniques for 3DTV Automatic 2D-to-3D Video Conversion Techniques for 3DTV Dr. Lai-Man Po Email: eelmpo@cityu.edu.hk Department of Electronic Engineering City University of Hong Kong Date: 13 April 2010 Content Why 2D-to-3D

More information

Victor S. Grinberg M. W. Siegel. Robotics Institute, School of Computer Science, Carnegie Mellon University 5000 Forbes Ave., Pittsburgh, PA, 15213

Victor S. Grinberg M. W. Siegel. Robotics Institute, School of Computer Science, Carnegie Mellon University 5000 Forbes Ave., Pittsburgh, PA, 15213 Geometry of binocular imaging III : Wide-Angle and Fish-Eye Lenses Victor S. Grinberg M. W. Siegel Robotics Institute, School of Computer Science, Carnegie Mellon University 5000 Forbes Ave., Pittsburgh,

More information

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,800 116,000 120M Open access books available International authors and editors Downloads Our

More information

ILLUMICONCLAVE I. Description: Meeting of experts convened to rule on topics related to advanced display.

ILLUMICONCLAVE I. Description: Meeting of experts convened to rule on topics related to advanced display. ILLUMICONCLAVE I Description: Meeting of experts convened to rule on topics related to advanced display. Location: Heidelberg, Germany 2016 Article I DEFINITIONS Ambiguous terms in display technology were

More information

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render 1 There are two major classes of algorithms for extracting most kinds of lines from 3D meshes. First, there are image-space algorithms that render something (such as a depth map or cosine-shaded model),

More information

Multiview Generation for 3D Digital Signage

Multiview Generation for 3D Digital Signage Multiview Generation for 3D Digital Signage 3D Content for Displays without Glasses based on Standard Stereo Content 3D brings new perspectives to digital signage and other public displays that don t use

More information

Shadows in the graphics pipeline

Shadows in the graphics pipeline Shadows in the graphics pipeline Steve Marschner Cornell University CS 569 Spring 2008, 19 February There are a number of visual cues that help let the viewer know about the 3D relationships between objects

More information

lecture 10 - depth from blur, binocular stereo

lecture 10 - depth from blur, binocular stereo This lecture carries forward some of the topics from early in the course, namely defocus blur and binocular disparity. The main emphasis here will be on the information these cues carry about depth, rather

More information

Practical Shadow Mapping

Practical Shadow Mapping Practical Shadow Mapping Stefan Brabec Thomas Annen Hans-Peter Seidel Max-Planck-Institut für Informatik Saarbrücken, Germany Abstract In this paper we propose several methods that can greatly improve

More information

Enhancing Traditional Rasterization Graphics with Ray Tracing. October 2015

Enhancing Traditional Rasterization Graphics with Ray Tracing. October 2015 Enhancing Traditional Rasterization Graphics with Ray Tracing October 2015 James Rumble Developer Technology Engineer, PowerVR Graphics Overview Ray Tracing Fundamentals PowerVR Ray Tracing Pipeline Using

More information

Lecture 14, Video Coding Stereo Video Coding

Lecture 14, Video Coding Stereo Video Coding Lecture 14, Video Coding Stereo Video Coding A further application of the tools we saw (particularly the motion compensation and prediction) is stereo video coding. Stereo video is used for creating a

More information

The Video Z-buffer: A Concept for Facilitating Monoscopic Image Compression by exploiting the 3-D Stereoscopic Depth map

The Video Z-buffer: A Concept for Facilitating Monoscopic Image Compression by exploiting the 3-D Stereoscopic Depth map The Video Z-buffer: A Concept for Facilitating Monoscopic Image Compression by exploiting the 3-D Stereoscopic Depth map Sriram Sethuraman 1 and M. W. Siegel 2 1 David Sarnoff Research Center, Princeton,

More information

Dominic Filion, Senior Engineer Blizzard Entertainment. Rob McNaughton, Lead Technical Artist Blizzard Entertainment

Dominic Filion, Senior Engineer Blizzard Entertainment. Rob McNaughton, Lead Technical Artist Blizzard Entertainment Dominic Filion, Senior Engineer Blizzard Entertainment Rob McNaughton, Lead Technical Artist Blizzard Entertainment Screen-space techniques Deferred rendering Screen-space ambient occlusion Depth of Field

More information

Image Base Rendering: An Introduction

Image Base Rendering: An Introduction Image Base Rendering: An Introduction Cliff Lindsay CS563 Spring 03, WPI 1. Introduction Up to this point, we have focused on showing 3D objects in the form of polygons. This is not the only approach to

More information

Rasterization Overview

Rasterization Overview Rendering Overview The process of generating an image given a virtual camera objects light sources Various techniques rasterization (topic of this course) raytracing (topic of the course Advanced Computer

More information

Computer Graphics. Chapter 1 (Related to Introduction to Computer Graphics Using Java 2D and 3D)

Computer Graphics. Chapter 1 (Related to Introduction to Computer Graphics Using Java 2D and 3D) Computer Graphics Chapter 1 (Related to Introduction to Computer Graphics Using Java 2D and 3D) Introduction Applications of Computer Graphics: 1) Display of Information 2) Design 3) Simulation 4) User

More information

Why should I follow this presentation? What is it good for?

Why should I follow this presentation? What is it good for? Why should I follow this presentation? What is it good for? Introduction into 3D imaging (stereoscopy, stereoscopic, stereo vision) S3D state of the art for PC and TV Compiling 2D to 3D Computing S3D animations,

More information

MEASUREMENT OF PERCEIVED SPATIAL RESOLUTION IN 3D LIGHT-FIELD DISPLAYS

MEASUREMENT OF PERCEIVED SPATIAL RESOLUTION IN 3D LIGHT-FIELD DISPLAYS MEASUREMENT OF PERCEIVED SPATIAL RESOLUTION IN 3D LIGHT-FIELD DISPLAYS Péter Tamás Kovács 1, 2, Kristóf Lackner 1, 2, Attila Barsi 1, Ákos Balázs 1, Atanas Boev 2, Robert Bregović 2, Atanas Gotchev 2 1

More information

Video Compression An Introduction

Video Compression An Introduction Video Compression An Introduction The increasing demand to incorporate video data into telecommunications services, the corporate environment, the entertainment industry, and even at home has made digital

More information

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye Ray Tracing What was the rendering equation? Motivate & list the terms. Relate the rendering equation to forward ray tracing. Why is forward ray tracing not good for image formation? What is the difference

More information

x ~ Hemispheric Lighting

x ~ Hemispheric Lighting Irradiance and Incoming Radiance Imagine a sensor which is a small, flat plane centered at a point ~ x in space and oriented so that its normal points in the direction n. This sensor can compute the total

More information

Depth Estimation for View Synthesis in Multiview Video Coding

Depth Estimation for View Synthesis in Multiview Video Coding MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Depth Estimation for View Synthesis in Multiview Video Coding Serdar Ince, Emin Martinian, Sehoon Yea, Anthony Vetro TR2007-025 June 2007 Abstract

More information

A Fast Image Multiplexing Method Robust to Viewer s Position and Lens Misalignment in Lenticular 3D Displays

A Fast Image Multiplexing Method Robust to Viewer s Position and Lens Misalignment in Lenticular 3D Displays A Fast Image Multiplexing Method Robust to Viewer s Position and Lens Misalignment in Lenticular D Displays Yun-Gu Lee and Jong Beom Ra Department of Electrical Engineering and Computer Science Korea Advanced

More information

Mobile 3D Display Technology to Realize Natural 3D Images

Mobile 3D Display Technology to Realize Natural 3D Images 3D Display 3D Image Mobile Device Special Articles on User Interface Research New Interface Design of Mobile Phones 1. Introduction Nowadays, as a new method of cinematic expression continuing from the

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

5LSH0 Advanced Topics Video & Analysis

5LSH0 Advanced Topics Video & Analysis 1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl

More information

Models and Architectures

Models and Architectures Models and Architectures Objectives Learn the basic design of a graphics system Introduce graphics pipeline architecture Examine software components for an interactive graphics system 1 Image Formation

More information

Stereo. Shadows: Occlusions: 3D (Depth) from 2D. Depth Cues. Viewing Stereo Stereograms Autostereograms Depth from Stereo

Stereo. Shadows: Occlusions: 3D (Depth) from 2D. Depth Cues. Viewing Stereo Stereograms Autostereograms Depth from Stereo Stereo Viewing Stereo Stereograms Autostereograms Depth from Stereo 3D (Depth) from 2D 3D information is lost by projection. How do we recover 3D information? Image 3D Model Depth Cues Shadows: Occlusions:

More information

Zero Order Correction of Shift-multiplexed Computer Generated Fourier Holograms Recorded in Incoherent Projection Scheme

Zero Order Correction of Shift-multiplexed Computer Generated Fourier Holograms Recorded in Incoherent Projection Scheme VII International Conference on Photonics and Information Optics Volume 2018 Conference Paper Zero Order Correction of Shift-multiplexed Computer Generated Fourier Holograms Recorded in Incoherent Projection

More information

Here s the general problem we want to solve efficiently: Given a light and a set of pixels in view space, resolve occlusion between each pixel and

Here s the general problem we want to solve efficiently: Given a light and a set of pixels in view space, resolve occlusion between each pixel and 1 Here s the general problem we want to solve efficiently: Given a light and a set of pixels in view space, resolve occlusion between each pixel and the light. 2 To visualize this problem, consider the

More information

Augmenting Reality with Projected Interactive Displays

Augmenting Reality with Projected Interactive Displays Augmenting Reality with Projected Interactive Displays Claudio Pinhanez IBM T.J. Watson Research Center, P.O. Box 218 Yorktown Heights, N.Y. 10598, USA Abstract. This paper examines a steerable projection

More information

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD

ECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD ECE-161C Cameras Nuno Vasconcelos ECE Department, UCSD Image formation all image understanding starts with understanding of image formation: projection of a scene from 3D world into image on 2D plane 2

More information

Ravikanth Pappu Carlton Sparrell* John Underkoffler Adam Kropp Benjie Chen Wendy Plesniak. {pappu, carltonj, jh, akropp, benjie,

Ravikanth Pappu Carlton Sparrell* John Underkoffler Adam Kropp Benjie Chen Wendy Plesniak. {pappu, carltonj, jh, akropp, benjie, A GENERALIZED PIPELINE FOR PREVIEW AND RENDERING OF SYNTHETIC HOLOGRAMS Ravikanth Pappu Carlton Sparrell* John Underkoffler Adam Kropp Benjie Chen Wendy Plesniak {pappu, carltonj, jh, akropp, benjie, wjp}@media.mit.edu

More information

COMP environment mapping Mar. 12, r = 2n(n v) v

COMP environment mapping Mar. 12, r = 2n(n v) v Rendering mirror surfaces The next texture mapping method assumes we have a mirror surface, or at least a reflectance function that contains a mirror component. Examples might be a car window or hood,

More information

Rendering Algorithms: Real-time indirect illumination. Spring 2010 Matthias Zwicker

Rendering Algorithms: Real-time indirect illumination. Spring 2010 Matthias Zwicker Rendering Algorithms: Real-time indirect illumination Spring 2010 Matthias Zwicker Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant

More information

Double buffering technique for binocular imaging in a window. Carnegie Mellon University, Pittsburgh, PA ABSTRACT

Double buffering technique for binocular imaging in a window. Carnegie Mellon University, Pittsburgh, PA ABSTRACT Double buffering technique for binocular imaging in a window Jeffrey S. McVeigh 1, Victor S. Grinberg 2 and M. W. Siegel 2 1 Department of Electrical and Computer Engineering 2 Robotics Institute, School

More information

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you will see our underlying solution is based on two-dimensional

More information

Parallelizing Graphics Pipeline Execution (+ Basics of Characterizing a Rendering Workload)

Parallelizing Graphics Pipeline Execution (+ Basics of Characterizing a Rendering Workload) Lecture 2: Parallelizing Graphics Pipeline Execution (+ Basics of Characterizing a Rendering Workload) Visual Computing Systems Today Finishing up from last time Brief discussion of graphics workload metrics

More information

Holographic Method for Extracting Three-Dimensional Information with a CCD Camera. Synopsis

Holographic Method for Extracting Three-Dimensional Information with a CCD Camera. Synopsis Mem. Fac. Eng., Osaka City Univ., Vol. 36,pp. 1-11.(1995) Holographic Method for Extracting Three-Dimensional Information with a CCD Camera by Hideki OKAMOTO*, Hiroaki DEDA*, Hideya TAKAHASHI**, and Eiji

More information

NOT FOR DISTRIBUTION OR REPRODUCTION

NOT FOR DISTRIBUTION OR REPRODUCTION www.pipelinepub.com Volume 10, Issue 11 Next-Generation Video Transcoding By Alexandru Voica The Emergence of H.265 (HEVC) and 10- Bit Color Formats Today s increasingly demanding applications, such as

More information

Next-Generation 3D Formats with Depth Map Support

Next-Generation 3D Formats with Depth Map Support MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Next-Generation 3D Formats with Depth Map Support Chen, Y.; Vetro, A. TR2014-016 April 2014 Abstract This article reviews the most recent extensions

More information

Compression Issues in Multiview Autostereo Displays

Compression Issues in Multiview Autostereo Displays Compression Issues in Multiview Autostereo Displays Druti Shah and Neil A. Dodgson Computer Lab, University of Cambridge, New Museum Site, Pembroke St, Cambridge CB2 3QG, UK. ABSTRACT Image compression

More information

Homework 3: Programmable Shaders

Homework 3: Programmable Shaders Homework 3: Programmable Shaders Introduction to Computer Graphics and Imaging (Summer 2012), Stanford University Due Monday, July 23, 11:59pm Warning: The coding portion of this homework involves features

More information

3D Autostereoscopic Display Image Generation Framework using Direct Light Field Rendering

3D Autostereoscopic Display Image Generation Framework using Direct Light Field Rendering 3D Autostereoscopic Display Image Generation Framework using Direct Light Field Rendering Young Ju Jeong, Yang Ho Cho, Hyoseok Hwang, Hyun Sung Chang, Dongkyung Nam, and C. -C Jay Kuo; Samsung Advanced

More information

CS451Real-time Rendering Pipeline

CS451Real-time Rendering Pipeline 1 CS451Real-time Rendering Pipeline JYH-MING LIEN DEPARTMENT OF COMPUTER SCIENCE GEORGE MASON UNIVERSITY Based on Tomas Akenine-Möller s lecture note You say that you render a 3D 2 scene, but what does

More information

IP Video Network Gateway Solutions

IP Video Network Gateway Solutions IP Video Network Gateway Solutions INTRODUCTION The broadcast systems of today exist in two separate and largely disconnected worlds: a network-based world where audio/video information is stored and passed

More information

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render

More information

Computational Photography

Computational Photography Computational Photography Matthias Zwicker University of Bern Fall 2010 Today Light fields Introduction Light fields Signal processing analysis Light field cameras Application Introduction Pinhole camera

More information

Dynamic Ambient Occlusion and Indirect Lighting. Michael Bunnell NVIDIA Corporation

Dynamic Ambient Occlusion and Indirect Lighting. Michael Bunnell NVIDIA Corporation Dynamic Ambient Occlusion and Indirect Lighting Michael Bunnell NVIDIA Corporation Environment Lighting Environment Map + Ambient Occlusion + Indirect Lighting New Radiance Transfer Algorithm Useful for

More information

Intermediate view synthesis considering occluded and ambiguously referenced image regions 1. Carnegie Mellon University, Pittsburgh, PA 15213

Intermediate view synthesis considering occluded and ambiguously referenced image regions 1. Carnegie Mellon University, Pittsburgh, PA 15213 1 Intermediate view synthesis considering occluded and ambiguously referenced image regions 1 Jeffrey S. McVeigh *, M. W. Siegel ** and Angel G. Jordan * * Department of Electrical and Computer Engineering

More information

View Synthesis for Multiview Video Compression

View Synthesis for Multiview Video Compression View Synthesis for Multiview Video Compression Emin Martinian, Alexander Behrens, Jun Xin, and Anthony Vetro email:{martinian,jxin,avetro}@merl.com, behrens@tnt.uni-hannover.de Mitsubishi Electric Research

More information

HoloGraphics. Combining Holograms with Interactive Computer Graphics

HoloGraphics. Combining Holograms with Interactive Computer Graphics HoloGraphics Combining Holograms with Interactive Computer Graphics Gordon Wetzstein Bauhaus University Weimar [gordon.wetzstein@medien.uni-weimar.de] 1 Location Weimar Dunedin Courtesy: NASA 2 HoloGraphics

More information

Digital holographic display with two-dimensional and threedimensional convertible feature by high speed switchable diffuser

Digital holographic display with two-dimensional and threedimensional convertible feature by high speed switchable diffuser https://doi.org/10.2352/issn.2470-1173.2017.5.sd&a-366 2017, Society for Imaging Science and Technology Digital holographic display with two-dimensional and threedimensional convertible feature by high

More information

Reduced Dual-Mode Mobile 3D Display Using Crosstalk

Reduced Dual-Mode Mobile 3D Display Using Crosstalk Reduced Dual-Mode Mobile 3D Display Using Crosstalk Priti Khaire, Student (BE), Computer Science and Engineering Department, Shri Sant Gadge Baba College of Engineering and Technology, Bhusawal, North

More information

Head Mounted Display for Mixed Reality using Holographic Optical Elements

Head Mounted Display for Mixed Reality using Holographic Optical Elements Mem. Fac. Eng., Osaka City Univ., Vol. 40, pp. 1-6 (1999) Head Mounted Display for Mixed Reality using Holographic Optical Elements Takahisa ANDO*, Toshiaki MATSUMOTO**, Hideya Takahashi*** and Eiji SHIMIZU****

More information

A Qualitative Analysis of 3D Display Technology

A Qualitative Analysis of 3D Display Technology A Qualitative Analysis of 3D Display Technology Nicholas Blackhawk, Shane Nelson, and Mary Scaramuzza Computer Science St. Olaf College 1500 St. Olaf Ave Northfield, MN 55057 scaramum@stolaf.edu Abstract

More information