Janne Taponen: Screen Specific Stereoscopic 3D Content
Janne Taponen: Screen Specific Stereoscopic 3D Content. Bachelor of Science Thesis. Examiner: Konsta Koppinen
ABSTRACT
Tampere University of Technology
Bachelor's Degree Programme in Computer Science
Author: Janne Taponen
Bachelor of Science Thesis, 22 pages, 2 appendix pages
December 2010
Major: Signal Processing and Multimedia
Examiner: Konsta Koppinen
Keywords: Stereoscopic Video, 3D Video, Screen specific content

This thesis studies the effects of scaling stereoscopic 3D content to fit different displays and how to maintain a proper depth perception. If stereoscopic 3D content is scaled to fit a different display device, the content will appear to be skewed along the z-axis due to the non-linear nature of interocular distance. For this thesis, experiments were done to understand and illustrate the effects of this scaling. The thesis also briefly reviews the most recent methods for non-linearly correcting interocular distance to maintain correct depth for the scenes.
CONTENTS
1. Introduction
2. Human visual system
3. Projection and viewing
3.1 Polarization method
3.2 Interference Filtering method
3.3 Eclipse method
3.4 Autostereoscopic displays
4. Screen size problem and adaptive parallax
4.1 Resolution specific content
4.2 Fixing parallax in post-production
4.3 Automatic disparity correction on an end user device
5. Experiments
5.1 Tools
5.2 Experiments
6. Conclusions
References
A. Appendix
TERMS AND SYMBOLS
IOD: Interocular distance - the distance between the eyes. 64 mm is considered to be the average distance for most people.
NPP: Native pixel parallax - how large a horizontal parallax between the views is possible.
LC-Glasses: Liquid Crystal Glasses - shutter glasses that have electronic liquid crystal shutters.
Binocular disparity: The difference in the images projected onto the back of the eye (and then onto the visual cortex) because the eyes are separated horizontally by the interocular distance. [1]
Convergence: Movement of the eyes towards each other to face the focal point.
Divergence: Movement of the eyes away from each other, not converging to a mutual focal point.
Cardboarding: Artifact from downscaling stereoscopic content that makes the depth appear skewed.
Hyperstereoscopy: Artifact from upscaling stereoscopic content that makes the furthest objects appear beyond infinity, thus leading the eyes to diverge.
Negative parallax: The objects appear to be coming out of the display.
Positive parallax: The objects appear to be behind the display.
1. INTRODUCTION

Advances in filmmaking over recent years, such as the transformation from standard 35mm film to new digital camera systems and the ability to generate complex visual effects with computers, have brought 3D back to mainstream cinema. 3D versions of the biggest blockbusters, such as Avatar and Shrek Forever After, have become some of the highest-grossing films of all time. The growing popularity of 3D has led studios to release most of their movies also as 3D versions, which has meant that most theaters are converting their existing equipment to have 3D capabilities. As the past has shown, what is shown in theaters eventually finds its way into normal consumer products, and this transformation is already starting to show. Major consumer electronics companies such as Sony, Panasonic and Samsung are all releasing consumer 3D displays and 3D capable players that employ various means of displaying stereoscopic imagery. More affordable consumer devices mean that it will not be long before you can view 3D content on a laptop or on a mobile device such as a mobile phone.

When the sizes of displays range from a 50" flat-screen TV to a mobile phone with a 4" display, a completely new problem is introduced that does not exist to this extent with normal 2D content. Stereoscopic 3D content is usually produced for a specific resolution, typically a cinema presentation with a very high resolution of up to 4K (4096 x 3072 pixels) and a screen size of up to 30 m wide. When this content is later downscaled to fit the resolution and display of a mobile phone, it is no longer as attractive visually, due to distortions and artifacts caused by the downscaling. Downscaling causes many different issues with the content, but this thesis focuses solely on how it affects the perception of depth. The perceived depth is only valid on the display size and resolution for which the content was originally created.
When the content is resized to fit other screens it will appear to be skewed along the z-axis.
2. HUMAN VISUAL SYSTEM

Creating stereoscopic imagery can be thought of simply as fooling the human visual system, but because of the level of sophistication our visual system operates at, it is extremely important to understand what our eyes are perceiving. Our eyes are generally looking for cues to identify objects, their sizes, relations and positions in scenes, detail, lighting, shadows and occlusion. All of these cues are present in 2D imagery; however, the introduction of depth generates additional cues for our eyes to perceive, and if any of these are even slightly off, the result is generally unnatural and unconvincing 3D imagery. The additional cues that 3D imagery brings are binocular disparity, accommodation and convergence.

Binocular disparity is generally considered to be the main visual cue for most people [1]. Parallax, the horizontal difference between the views, causes the imagery to appear to have a certain depth. Parallax can be either positive or negative. Positive parallax means that objects are perceived to be behind the screen, or zero parallax plane, while objects with negative parallax appear in front of the zero parallax plane. These are illustrated in figure 2.1.

The easiest way to understand convergence and accommodation is to think of a camera. In order to take a good photo of any particular scene, the camera needs to be pointed at the scene and then focused appropriately. The same applies to the human visual system: our eyes need to rotate so that they face the focal point of the scene, and then the lens of each eye needs to be adjusted so that the scene is in focus. These two steps are called convergence and accommodation, respectively. As stereoscopic viewing causes more eye strain, only a small region between the positive and negative parallax planes is usable for presenting objects.
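The relationship between screen parallax and perceived depth can be made concrete with a small sketch. The formula below is the standard similar-triangles model of stereoscopic viewing geometry, not something given in this thesis; the 200 cm viewing distance and 6.4 cm eye separation are illustrative assumptions.

```python
def perceived_depth(parallax_cm, viewing_distance_cm=200.0, eye_separation_cm=6.4):
    """Distance from the viewer to the fused point, by similar triangles.

    Positive (uncrossed) parallax pushes the point behind the screen,
    negative (crossed) parallax pulls it in front. As the parallax
    approaches the eye separation the depth tends to infinity; beyond
    that the eyes would have to diverge.
    """
    if parallax_cm >= eye_separation_cm:
        return float("inf")  # divergence: no valid fusion point
    return viewing_distance_cm * eye_separation_cm / (eye_separation_cm - parallax_cm)

print(perceived_depth(0.0))   # zero parallax: the point lies on the screen, 200.0
print(perceived_depth(3.2))   # half the eye separation: twice the screen distance
print(perceived_depth(-6.4))  # crossed parallax: the point pops out in front
```

The zero-parallax case gives the screen distance itself, which is why the zero parallax plane feels attached to the display surface.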
Extreme pop-out effects, made using larger than normal negative parallax, were extremely popular in the late 90s when the film studios tried to make 3D films more popular. These extreme effects are known to cause a lot of eye strain because the comfortable level of parallax is exceeded. Modern 3D content tends to be designed to pop out less in order to make it possible to view feature-length films comfortably without experiencing eye strain or nausea. [2]
Figure 2.1: Positive and negative parallaxes illustrated
3. PROJECTION AND VIEWING

There are as many ways to view and display stereoscopic imagery as there are to generate it. The purpose of this chapter is to outline the most common systems and briefly look at the advantages and disadvantages of each.

3.1 Polarization method

Polarization systems utilize different polarizations to distinguish between the left and right images. Both linear and circular polarization can be used to produce a stereoscopic image, although circular polarization is usually preferred because of its advantages over linear polarization: it enables a viewer to tilt and move their head without disturbing the perception of depth. In polarization systems the picture is projected onto a specially coated silver screen, because reflection from a non-metallic surface destroys the polarization of the light. This allows the use of relatively inexpensive passive glasses that simply have a different polarization filter for each eye. Projectors in polarization systems usually have either a filter that can change the polarity or a special lens assembly that does both the projection and the polarity changing. These systems only need one projector since the images are alternated rather than interlaced. Polarization systems are still the most common 3D projection systems in theaters, but due to the recent boom in consumer electronics and the conversion of existing cinemas to 3D capability, systems based on other methods are surpassing them.

RealD

Currently the most widely used technology based on polarization is RealD Inc's RealD Cinema. RealD Cinema uses passive glasses and circularly polarized light from a projector that alternates the polarization for the left and right eye using an electro-optical liquid crystal modulator called ZScreen. Frames are polarized using opposite polarizations for the two eyes: clockwise for the right eye and counterclockwise for the left eye.
Like other systems based on polarization, RealD Cinema uses a rather expensive silver screen to maintain the polarization of light in the projected image. Frames in RealD Cinema systems are projected using a method titled "Triple Flash", where each frame is displayed three times for each eye, resulting in a considerably higher frame rate of 72 fps/eye compared to the normal cinematic frame rate
of 24 fps/eye. The triple flash system is used to help reduce the effects of flicker, ghosting and stuttering in fast horizontal camera movements. [3]

Flicker is the same video artifact that was present in old black and white films, where it is possible to see black flashes between the frames. Flicker simply means that the refresh rate is too slow and the viewer notices the changing of frames. Flicker was an especially big problem with older CRT monitors that used low refresh rates. In ghosting, the viewer sees a halo of the previous frame on top of the current frame, which makes the picture look unsharp. Stuttering means that the motion displayed on screen does not look continuous and fluid. The effects of stuttering can easily be noticed when viewing videos captured with, for example, a mobile phone camera: such videos usually have a very low frame rate, which means that any fast motion in them does not look fluid.

MasterImage 3D

MasterImage 3D is a stereoscopic viewing system developed by MasterImage LLC. Like RealD, MasterImage 3D utilizes alternating circular polarization to differentiate between the frames for the left and right eye. Rather than using an electronic optical filter, the MasterImage 3D system uses a large disc that is placed in front of the projector. This disc is divided into two halves, of which one has clockwise and the other counterclockwise polarization. The disc is spun at 4320 rpm, which results in each frame being displayed three times, so MasterImage 3D systems have the same frame rates as RealD systems. MasterImage 3D has a few advantages over RealD: because the system is based on a rotating disc it has better brightness, and it does not need special files to compensate for the light leakage between eyes.
MasterImage 3D is a fairly new system in the field of stereoscopic projection, but it has been growing quickly and is now claimed to be the fastest growing digital 3D system in North America and Europe. [4]

3.2 Interference Filtering method

The interference filtering technique, or wavelength multiplex visualization, is based on shifting light and transmitting it at a slightly different wavelength from its original one, while still preserving the original color gamut. Rather than using a normal color wheel that has only red, green and blue filters, the wavelength multiplex color wheel has an additional set of red, green and blue filters, which shifts the light passing through it away from its original wavelength. In wavelength multiplex systems this shift in wavelength is used to separate the left and right images, meaning the set of red, green and blue colors for each eye is shifted to a different part of the spectrum. The glasses used in cinemas using interference
filtering have filters complementary to the ones found in the color wheel of the projector. Dichroic filters, or interference filters, which can filter a specific range of colors very accurately, are used in the glasses to allow only specific wavelengths to enter each eye. These filters block the other set of colors and allow only the set of red, green and blue that is correct for each eye to enter it. [6] Figure 3.1 illustrates this filtering procedure.

Figure 3.1: Wavelength multiplexing illustrated. Purple and orange areas represent the dichroic filters for the left and right eye respectively. The dichroic filter for each eye is zero everywhere except around the frequencies allowed for that eye; it simply acts as a bandpass filter picking out only specific frequencies. In wavelength multiplexing both the left and right frames are multiplexed together and displayed simultaneously, so filters like this are needed to separate the views.

Dolby 3D

The Dolby 3D Digital Cinema system developed by Dolby Laboratories Inc. uses wavelength multiplex visualization to achieve its stereoscopic effect. Dolby 3D Cinema is built using standard Dolby components and the Dolby Digital Cinema projector, which retains compatibility with 2D movies while still allowing inexpensive conversion to 3D by simply changing the projector's color wheel to a 3D color wheel with an additional set of red, green and blue filters. Cinemas using Dolby 3D and wavelength multiplexing can use their existing screen for projection, since wavelength multiplexing does not need a special silver screen to maintain the properties of the projected light. Projectors using wavelength multiplexing are able to show both the left and the right frames simultaneously, with richer and more realistic colors and a sharper image. [5, 6]

3.3 Eclipse method

The eclipse method of showing 3D imagery has the same basic idea as the polarization systems: projected frames are alternated.
In the eclipse systems, rather than
needing to invest in relatively expensive specially coated screens, a normal screen can be used, since the stereoscopic effect is achieved by blocking the eyes in turn using special liquid crystal shutter glasses, or LC glasses. These glasses have shutters that are opened and closed in synchronization with the projector alternating the left and right frames. Eclipse-method systems only need an image source capable of displaying higher than normal frame rates and of transmitting the synchronization data either wirelessly or via a cable. Nearly all current consumer 3D displays use this method for showing stereoscopic images, because the eclipse method only needs a transmitter to keep the glasses synchronized with the display and a panel capable of a high enough refresh rate.

A major drawback of this 3D technology, and the reason it is only slowly catching on, is the expensive shutter glasses, with most models costing hundreds of euros. These electronic glasses need shutters for blocking each eye and a receiver for the synchronization data. The lens for each eye contains a thin liquid crystal layer which can block the view of the screen based on the timing signal sent by the video source. Since each eye sees the whole frame at a time without any filtering, eclipse systems have more neutral colors and viewers are able to see the whole color spectrum. Because LC glasses have shutters, flicker usually results: since the frames for the eyes are alternated, each eye actually perceives only half of the refresh rate of the source video. This means that the refresh rates of monitors and projectors need to be doubled to reduce flicker. Many manufacturers produce LC glasses for home viewing, including all the major consumer electronics companies such as Panasonic, Samsung and Sony.
Other manufacturers, like XpanD 3D or Nvidia, focus on more specific market segments such as cinema or PC. XpanD 3D is currently the best known manufacturer of eclipse-based cinema systems, with over 1000 cinemas using their LC glasses.

3.4 Autostereoscopic displays

Autostereoscopic displays allow viewers to see stereoscopic images without any special viewing aids such as shutter or polarized glasses. There are many different types of autostereoscopic displays, such as parallax barrier, lenticular, volumetric and holographic. Due to limitations in each of the display technologies, the most used ones employ either parallax barriers or lenticular lenses. Both display types use optical elements added on top of the surface of the screen. These optical elements scatter the light emitted from the screen in a specific way such that each eye receives a different image, resulting in a stereoscopic image. Because the light is scattered or redirected to specific places, both of these screen types have very limited viewing regions where the stereoscopic image is visible. Images for autostereoscopic displays are usually interlaced, and the images for
the left and right eye alternate on every vertical pixel row. One of the biggest problems for these display types is the halving of the horizontal resolution due to the alternating rows: either the number of horizontal pixels needs to be doubled in order to have the same resolution, or the resolution of the source material needs to be halved.

Lenticular

Lenticular autostereoscopic displays usually have long sheets of narrow lenses layered on top of each vertical row of the display. The purpose of these lenticular lenses is to direct the light from each vertical row in a different direction. This means that only horizontal parallax information is visible which, in most cases, is enough to provide an acceptable stereoscopic effect. However, it is also possible to have spherical lenses on top of each pixel, which allows a viewer to see both horizontally and vertically varying parallax. Although autostereoscopic displays with spherical lenses in theory provide a more realistic stereoscopic image, lenticular displays with long cylindrical lens arrays over the vertical rows are still much more common. [7]

Parallax Barrier

Parallax barrier displays use the same principle as lenticular displays, but rather than having a layer of lenses on top of the screen they have, as the name suggests, a barrier. The parallax barrier is a layer of narrow vertical slits that, when viewed from the correct distance, allows the viewer's eyes to see different vertical rows. This property is then used in a similar fashion as in lenticular displays to show interlaced images where the rows alternate, showing a different image to each eye.
Because a parallax barrier uses slits rather than lenses to differentiate the vertical rows, it creates several problems: the viewer may see the neighboring row through a slit, the viewer may experience a repeating perspective when moving around, and due to the nature of the slits there is always some discontinuity, visible as dark lines, present in the image. [7]
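The interlacing used by both autostereoscopic display types can be sketched in a few lines. The helper below is hypothetical and assumes a panel that expects the left view on even pixel columns (the "vertical rows" above) and the right view on odd ones.

```python
def interleave_columns(left, right):
    """Interlace two equally sized views for an autostereoscopic panel:
    even pixel columns are taken from the left view, odd columns from
    the right view, so each eye receives every second column."""
    assert len(left) == len(right) and len(left[0]) == len(right[0])
    return [
        [l[x] if x % 2 == 0 else r[x] for x in range(len(l))]
        for l, r in zip(left, right)
    ]

# A one-row example: the output alternates between the two source views.
print(interleave_columns([["L"] * 4], [["R"] * 4]))  # [['L', 'R', 'L', 'R']]
```

This also makes the resolution penalty mentioned above obvious: each view contributes only half of the output columns.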
4. SCREEN SIZE PROBLEM AND ADAPTIVE PARALLAX

Despite recent technological advances, stereoscopic content production and display face major issues, including the limitations of the viewing devices, the considerable expertise needed to film, direct and edit stereoscopic material, and the fact that the human visual system is extremely good at spotting even the smallest inconsistencies. All of these can cause the perceived depth effect of stereoscopic content either to shatter completely or to be severely disrupted. While convergence can be accurately reproduced using modern display technology, most of the other depth cues cannot be reproduced in a way that makes the generated content look natural. Depth cues like eye accommodation are very hard to recreate using only a flat projection surface such as a TV or a mobile phone [8]. The huge variety of different end user devices and their differing specifications creates another major problem in making stereoscopic content look natural.

4.1 Resolution specific content

The introduction of stereoscopic 3D has brought new problems for creating natural looking content for different end user devices. Generally, stereoscopic content can be considered display size and resolution dependent: the created content will only look as originally intended on end user devices that have the same display size and resolution as those for which it was optimized upon creation. Whether stereoscopic content is created using 3D animation software or by filming on location, these parameters, including the interocular distance, the camera baseline distance and the desired positive and negative disparity ranges, along with the intended display device and its properties such as resolution and size, need to be decided before the actual filming is started, because they play an important role in how natural the final content will look.
None of these parameters can be naturally adjusted after the initial creation of the content, making stereoscopic content production extremely inflexible compared to standard content production. Research on stereoscopic content creation is now increasingly focused on making the content independent of the display device specifications and on avoiding artifacts that result from scaling, such as cardboarding or hyperstereoscopy.
Figure 4.1: Hyperstereoscopy, a problem when upscaling content. When the parallax between the views is too great, the eyes no longer converge and start to diverge, meaning the objects appear beyond infinity.

Cardboarding is a problem that usually appears when content is downscaled to fit a smaller display, whereas hyperstereoscopy is the exact opposite and is usually visible when content intended for a small display is upscaled to fit a larger display. When content is downscaled, the interocular distance is downscaled by the same factor; however, since interocular distance does not scale linearly, the content will appear to have a skewed depth. Objects in the scene still have some depth, but because the interocular distance is incorrect the objects look like cardboard cutouts, hence the term cardboarding. When content for a small display is upscaled to fit a larger display, the interocular distance is again changed by the same factor. Since the interocular distance is increased, the disparity range of the scene also changes. This usually results in the eyes diverging rather than converging: the eyes start pointing away from each other, whereas normally the gaze from both eyes meets somewhere in front of the viewer. When the gaze from the two eyes does not meet, from the viewer's point of view the objects are beyond infinity. As an illustration: in the original scene for the small display, the objects furthest away are 300 units from the viewer, which translates to a maximum parallax of 200 px on the screen. The scene is then upscaled by a factor of two to fit a larger display, so the parallax is also increased from 200 px to 400 px. If the maximum allowed parallax for the larger screen is, for example, 300 px, the scaled content exceeds the maximum parallax, so the eyes diverge and the content appears beyond infinity.
Figure 4.1 illustrates this problem.
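The arithmetic of that example can be written down directly. This is only a sketch; the 200 px parallax, 2x scale factor and 300 px limit are the illustrative numbers from the text.

```python
def check_upscaled_parallax(parallax_px, scale_factor, max_parallax_px):
    """Scaling an image multiplies its on-screen parallax by the same
    factor; if the scaled parallax exceeds the display's maximum, the
    eyes are forced to diverge (hyperstereoscopy)."""
    scaled = parallax_px * scale_factor
    return scaled, scaled <= max_parallax_px

scaled, comfortable = check_upscaled_parallax(200, 2, 300)
print(scaled, comfortable)  # 400 False -> objects appear beyond infinity
```

The same check with a smaller source parallax, say 100 px, would stay inside the 300 px limit, which is why conservatively authored content survives moderate upscaling better.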
4.2 Fixing parallax in post-production

There are various ways of adapting stereoscopic content to different display sizes and resolutions. One way to accomplish this is to adjust the parameters artificially in post-production. The main advantage of this approach is that it can be done together with the original director or cinematographer of the content, so it can be ensured that the content looks as intended. Another advantage is that powerful computers and render farms can be used to run sophisticated signal processing algorithms and to re-render the created content.

4.3 Automatic disparity correction on an end user device

End user devices range from really powerful desktop computers to TVs and mobile phones. This means there are hundreds of different screen size and resolution combinations, so hundreds of different versions of the same content would need to be made to suit the end user devices. In order to make stereoscopic 3D a feasible option, some of this scaling and display adaptation needs to be performed not by the content creator but by the end user device. Lang et al. researched this issue in their paper titled Nonlinear Disparity Mapping for Stereoscopic 3D. In the paper they introduce a lightweight method for correcting disparity ranges that can be implemented on devices that do not have massive amounts of processing power. The method is based on simple image warping rather than complex camera parameter adjustments or stereo regeneration techniques. The low computational complexity of this method makes it possible to run the processing on the fly when the video is played on the device [8].
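As a rough illustration of the idea of nonlinear disparity mapping (not the operator actually used by Lang et al., whose method warps the images themselves), a sign-preserving logarithmic remapping can compress a large disparity range into a smaller one while preserving relatively more of the fine depth detail near the screen plane:

```python
import math

def compress_disparity(d, in_max, out_max, s=4.0):
    """Map a disparity d in [-in_max, in_max] to [-out_max, out_max]
    with a logarithmic curve: small disparities are compressed less
    than large ones, so depth detail near the screen plane survives.
    An illustrative sketch only, not the operator from Lang et al."""
    sign = 1.0 if d >= 0 else -1.0
    x = min(abs(d) / in_max, 1.0)                  # normalise to [0, 1]
    y = math.log(1.0 + s * x) / math.log(1.0 + s)  # nonlinear compression
    return sign * y * out_max

print(compress_disparity(200, 200, 50))  # the full range maps to the new limit
print(compress_disparity(20, 200, 50))   # small disparities keep more than 1/4 of their value
```

A purely linear rescale (multiplying every disparity by 50/200) would instead flatten near-screen depth uniformly, which is exactly the cardboarding effect discussed in section 4.1.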
5. EXPERIMENTS

To illustrate the effects of varying camera parameters, stereoscopic 3D material was rendered for the experiments. This material was composed of various images and videos rendered from different scenes of the open source movie Big Buck Bunny. Scenes from the Big Buck Bunny movie provided an ideal base for the test renderings because all of the original material, including an entire studio backup with project files, can be freely downloaded from the project website. The original version of the movie is rendered entirely in standard 2D, which meant that for the purposes of the stereoscopic tests the scenes needed to be re-rendered into various stereoscopic 3D formats.

5.1 Tools

Matlab R2007b was used for two main tasks: to create images for finding the comfortable disparity ranges of the screens, and to accomplish the row interleaving needed by the displays. Stereoscopic Viewer was used to display videos and images on the displays. Blender, run on the Ubuntu Linux platform, was used to render the scenes from the Big Buck Bunny movie. The Blender Stereoscopic Rendering plugin was used to transform the original camera of the Big Buck Bunny scenes into two new cameras, left and right, which were then used for the left and right views for stereoscopic viewing [9]. For the experiments, three different display devices were used: a Vuon 46" 3D capable LCD TV, a Sharp 15" laptop with an autostereoscopic display, and an Acer 15.6" laptop with 3D capability.

5.2 Experiments

The first thing that was done before any of the material was rendered was to establish a baseline for the disparity ranges. In order to find the disparity ranges, a script was written in Matlab that generated a series of images of a white cube on a black background. Each consecutive image added 1 pixel to the offset of the cubes, which meant a total native pixel parallax change of 2 pixels. The offset ranges were from 0 pixels to 200 pixels for both the positive and negative parallax.
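A sketch of such a test pattern generator (in Python rather than the Matlab used in the thesis; the image dimensions and square size below are arbitrary placeholders): each view shifts a white square in the opposite horizontal direction, so an offset of n pixels per view yields 2n pixels of parallax between the views.

```python
def stereo_test_pair(width, height, square, offset_px):
    """One left/right test pair: a white square (255) on black (0),
    shifted +offset_px in one view and -offset_px in the other, for a
    total parallax of 2 * offset_px between the views."""
    def frame(shift):
        img = [[0] * width for _ in range(height)]
        x0 = width // 2 - square // 2 + shift
        y0 = height // 2 - square // 2
        for y in range(y0, y0 + square):
            for x in range(max(0, x0), min(width, x0 + square)):
                img[y][x] = 255
        return img
    return frame(offset_px), frame(-offset_px)

# A small-scale version of the 0..200 px offset sweep described above:
pairs = [stereo_test_pair(64, 32, 8, n) for n in range(0, 11)]
```

Sweeping the offset one pixel at a time and noting where crosstalk appears is exactly the baseline-finding procedure described in the text.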
Because of the sharp edges of the cube and the large contrast change from 100% black to white, when these
parallax values were used in the actual test renderings, the rendered scenes were always within the comfort zone for stereoscopic viewing. The baseline values for each display were found by cycling through the images until one was found that showed either significant crosstalk between the views or a loss of the stereoscopic effect for some other reason. The limiting factor for all of the screens was always the crosstalk between the views. Table 5.1 contains the measured values for each display.

Display             Resolution (r_x x r_y)   Measured NPP (px)   Theoretical NPP (px)
Vuon 46" 3D TV      1920x1080                ±50                 ±120
Sharp Laptop 15"    1024x768                 ±15                 ±215
Acer Laptop 15.6"   1920x1080                ±20                 ±356

Table 5.1: Measured and theoretical Native Pixel Parallax (NPP) values for each display used in the experiments.

The next step in the process was to verify the disparity ranges and scene parameters using equation (5.1):

    NPP = (IOD * r_x) / (s * cos(arctan(r_y / r_x)))    (5.1)

This equation gives the distance between the eyes in pixels when mapped onto a specific screen, i.e. the display's native pixel parallax. The parameters are the human interocular distance IOD in centimeters, the display diagonal size s in centimeters, and the horizontal and vertical resolutions of the screen, r_x and r_y respectively. The NPP thus forms the upper boundary that must not be overstepped [10].

Two scenes were selected for the test renderings; the different properties of each made them good test scenes. Scene 1 - Rabbit had multiple overlapping and occluding objects and a continuous movement from the far parallax plane of the scene towards the camera, all the way to the near parallax plane. Scene 2 - River was more static in terms of movement and had only a few occluding objects, but the main reasons this scene was chosen were the fine details of the grass and the lighting and shadows of the scene, particularly from the grass.
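Equation (5.1) can be checked numerically. The sketch below reproduces the theoretical NPP column of Table 5.1 to within a pixel of rounding, assuming an IOD of 6.4 cm:

```python
import math

def native_pixel_parallax(diag_cm, r_x, r_y, iod_cm=6.4):
    """Equation (5.1): the interocular distance expressed in pixels of a
    given display. The physical screen width is diag_cm * cos(arctan(r_y / r_x)),
    and dividing the IOD by the width of a single pixel gives the NPP."""
    screen_width_cm = diag_cm * math.cos(math.atan(r_y / r_x))
    return iod_cm * r_x / screen_width_cm

INCH = 2.54  # cm per inch

print(round(native_pixel_parallax(46 * INCH, 1920, 1080)))    # Vuon 46" 3D TV
print(round(native_pixel_parallax(15 * INCH, 1024, 768)))     # Sharp Laptop 15"
print(round(native_pixel_parallax(15.6 * INCH, 1920, 1080)))  # Acer Laptop 15.6"
```

Note how the theoretical NPP grows with pixel density: the Acer's full-HD panel packs 1920 pixels into a much narrower screen than the Vuon TV, so far more pixels fit inside one eye separation.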
These scenes were then converted to stereoscopic 3D using the Blender stereoscopic rendering plug-in developed by Sebastian Schneider [9]. This plug-in converts a single camera in the scene into a stereoscopic camera rig, preserving the original camera parameters such as the depth of field. The camera rig is illustrated in Figure A.1 in Appendix A. The plug-in makes it possible to place near and far planes into the scene; after the planes are set at the correct distances, the plug-in calculates the parallaxes for both the near and the far plane. The calculated parallaxes
can then be compared against the NPP of each display to verify that the parallaxes are correct. The resulting stereoscopic camera rig parameters for the test scenes are shown in Table 5.2.

Display             Scene              IOD (BU)   NP   FP
Vuon 46" 3D TV      Scene 1 - Rabbit
Vuon 46" 3D TV      Scene 2 - River
Sharp Laptop 15"    Scene 1 - Rabbit
Sharp Laptop 15"    Scene 2 - River
Acer Laptop 15.6"   Scene 1 - Rabbit
Acer Laptop 15.6"   Scene 2 - River

Table 5.2: Interocular distance in Blender Units (BU) and the near (NP) and far (FP) parallaxes for each scene.

For the actual experiments, all the material was optimized for the Vuon 46" 3D TV and then viewed on the other displays to see how the resolution and screen size would affect the perceived depth. The renderings were done using the NPP values obtained in the screen tests. The limiting factor for all of the scenes was the near objects (objects in front of the zero parallax plane), which meant that the maximum parallax reached by objects behind the zero parallax plane was considerably less than the maximum allowed NPP, generally about 60% of the maximum. All the resulting scenes from the test renderings provided an optimal baseline because of their natural and pleasant depth effect. After the scenes for the Vuon display were rendered, the Blender stereoscopic rendering plug-in was used to calculate the near and far parallaxes for the other two displays, to see how they corresponded to the ranges obtained in the screen tests. These values can also be found in Table 5.2. The values calculated for the other displays showed that the near parallaxes are around two times bigger than was found to be optimal in the screen tests. Because the far parallax in the Vuon-optimized scenes is considerably less than is allowed, the far parallaxes for the other displays are just within range for every scene except Scene 1 - Rabbit on the Acer laptop.
Since the parallaxes are not within the allowed range, the content appears skewed along the z-axis and only has the correct depth on the display for which it was originally intended. Halving the interocular distance for both scenes brings both the near and far ranges within the allowed ranges for the other displays, but it means a much shallower depth effect on the Vuon display, since the allowed interocular distance there is twice as large. This clearly illustrates the problems that arise from the use of various screen sizes and resolutions when displaying stereoscopic content.
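A back-of-the-envelope version of that cross-display comparison can be made with the measured NPP limits from Table 5.1; treating the Vuon's ±50 px as the authored near-parallax budget is an illustrative assumption, not a figure from Table 5.2.

```python
def resized_parallax(parallax_px, src_rx, dst_rx):
    """Pixel parallax after resizing content from a source horizontal
    resolution src_rx to a target resolution dst_rx: the parallax
    scales by the same factor as the image width."""
    return parallax_px * dst_rx / src_rx

# Content authored at +/-50 px for the Vuon TV (1920 px wide), shown elsewhere:
sharp = resized_parallax(50, 1920, 1024)  # vs. the Sharp's measured +/-15 px limit
acer = resized_parallax(50, 1920, 1920)   # vs. the Acer's measured +/-20 px limit
print(round(sharp, 1), acer)  # both land roughly 2x over the comfortable range
```

Both targets end up well beyond their measured comfort limits, consistent with the near-parallax observation above, and halving the parallax budget (the halved-IOD remedy mentioned in the text) would bring the Acer case close to its limit while shallowing the depth on the Vuon.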
6. CONCLUSIONS

When generating content for different displays, it is important to take the properties of the intended display device into consideration. If the parameters are not scaled and fitted to the particular display, the result is an unnatural and disrupted depth effect; the experiments done for this thesis confirm this. In order to make stereoscopic 3D a viable video format for the future regardless of the end-user device, these problems need to be addressed, for example by using the image warping techniques discussed in Chapter 3.
REFERENCES

[1] Bourke, P. Calculating stereo pairs [WWW]. Accessible from: stereographics/stereorender/.
[2] IJsselsteijn, W.A., de Ridder, H., Vliegen, J. Subjective evaluation of stereoscopic images: Effects of camera parameters and display duration. IEEE Transactions on Circuits and Systems for Video Technology 10(2000)2.
[3] RealD Inc. RealD Cinema system technical specifications [WWW]. Accessible from:
[4] MasterImage LLC. MasterImage 3D Cinema system product brief leaflet.
[5] Dolby Laboratories Inc. Dolby 3D Digital Cinema technical specifications [WWW]. Accessible from: professional/technology/cinema/dolby-3ddigital.html.
[6] Jorke, H., Fritz, M. Infitec - A new stereoscopic visualisation tool by wavelength multiplex imaging. Ulm, Germany. 7 p.
[7] Halle, M. Autostereoscopic displays and computer graphics. Computer Graphics, ACM SIGGRAPH 31(1997)2.
[8] Lang, M., Hornung, A., Wang, O., Poulakos, S., Smolic, A., Gross, M. Nonlinear Disparity Mapping for Stereoscopic 3D. ACM Trans. Graph. 29, 4, Article 75 (July 2010), 10 p.
[9] Schneider, S. Stereoscopic rendering in Blender [WWW]. Accessible from:
[10] Ide, K., Sikora, T. Adaptive Parallax for 3D Television. 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2010, 4 p.
A. APPENDIX

Figure A.1: Blender camera setup and all the adjusted parameters
Figure A.2: Test Scene 1 - Rabbit

Figure A.3: Test Scene 2 - River