BINOCULAR DISPARITY AND DEPTH CUE OF LUMINANCE CONTRAST

NAN-CHING TAI
National Taipei University of Technology, Taipei, Taiwan
N. Gu, S. Watanabe, H. Erhan, M. Hank Haeusler, W. Huang, R. Sosa (eds.), Rethinking Comprehensive Design: Speculative Counterculture, Proceedings of the 19th International Conference on Computer-Aided Architectural Design Research in Asia (CAADRIA 2014), The Association for Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong

Investigation of Perceptual Influence of Binocular Disparity on the Depth Effect of Luminance Contrast through Stereo Display

Abstract. Luminance contrast has been identified as an effective depth cue through perceptual studies using digital images generated by the integrated technologies of physically based lighting simulation and perceptually based tone mapping. However, the previously established framework uses a single camera viewpoint and therefore fails to address the binocular vision of the human visual system. In this study, the computational framework is extended to incorporate three-dimensional (3D) stereo display technology. Psychophysical experiments were conducted to investigate the depth effect of luminance contrast in experimental scenes presented on conventional and stereo displays. The objective of this study was twofold: first, to investigate the effect of luminance contrast on depth perception under binocular vision; second, to further advance the visual realism of the computer-generated environment so that it reflects the perceptual reality of both static pictorial and binocular disparity cues.

Keywords. High dynamic range imagery; luminance contrast; binocular disparity; stereo display; depth perception.

1. Introduction

Contrast has proven to be an effective cue for creating illusory depth on a planar surface (O'Shea et al., 1994).
To utilize luminance contrast as a design parameter to enrich the spatial experience, Tai (2012) developed a computer-generated pictorial environment that reflects perceptual reality, allowing the effect of luminance contrast to be investigated and envisioned in three-dimensional space. The effects of light, architectural configuration, and the depth perception of a visual target in an architectural scene were investigated, and luminance contrast was identified as an effective depth cue: a visual target that has a higher contrast against the foreground than against the background appears deeper in space than its actual location (Tai and Inanici, 2012; Tai, 2013). Experimental scenes in those studies were generated by the physically based lighting simulation program RADIANCE in High Dynamic Range (HDR) image format (Ward and Shakespeare, 1998). The accuracy of images generated by RADIANCE has been validated (Mardaljevic, 2001; Ruppertsberg and Bloj, 2006). To display an HDR scene on a common display with a limited dynamic range, it must be tone-mapped into a low dynamic range format such as JPG. The Photographic tone-mapping operator developed by Reinhard et al. (2002) is among the best performers on several perceptual criteria in many perceptual studies (Kuang et al., 2007; Cadík et al., 2008).

These prior studies established a computational framework for generating a digital pictorial environment that reflects perceptual reality in terms of how light is distributed in an architectural scene and how it is perceived by the human visual system. However, the final image of that framework is output from a single camera viewpoint; the visual realism it offers can at best match monocular vision, failing to address the binocular vision of the human visual system. In this study, stereo display technology is incorporated into the established framework, and perceptual studies on the depth effect of luminance contrast were conducted in the resulting computer-generated environment.
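As a rough illustration of the tone-mapping step, the global form of the Photographic operator scales scene luminances by the log-average "key" of the image and compresses them into [0, 1). The sketch below is a minimal stdlib version under stated assumptions (hypothetical luminance samples; the study's pipeline may also use the operator's local dodging-and-burning variant), not the exact implementation used:

```python
import math

def photographic_tonemap(luminances, key=0.18, eps=1e-6):
    """Global form of the Photographic operator (Reinhard et al., 2002):
    scale by the log-average luminance, then compress with L/(1+L)."""
    # Log-average (geometric mean) of the world luminances.
    log_avg = math.exp(sum(math.log(eps + lw) for lw in luminances) / len(luminances))
    scaled = [key * lw / log_avg for lw in luminances]   # Lm = a * Lw / L_avg
    return [lm / (1.0 + lm) for lm in scaled]            # Ld in [0, 1)

# Hypothetical HDR luminance samples (cd/m^2) spanning several orders of magnitude.
hdr = [0.01, 0.5, 2.0, 150.0, 10000.0]
ldr = photographic_tonemap(hdr)
```

The compression preserves the ordering of luminances while guaranteeing a displayable range, which is why luminance-contrast relationships survive the mapping to JPG.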
The objective is to investigate the effect of luminance contrast on depth perception under binocular vision, and to further advance the visual realism of the computer-generated environment so that it reflects the perceptual reality of both static pictorial and binocular disparity cues.

2. Binocular Disparity and Depth Perception

Depth perception relies on depth cues, which can be broadly categorized as binocular cues, including convergence and binocular disparity, and monocular cues, including kinetic and pictorial cues (Palmer, 1999; Solso, 2003). Convergence, a physiological cue, refers to the angle at which the two eyes converge, which provides feedback reflecting the distance of the object on which they are focused. A kinetic cue is dynamic visual information resulting from the relative change of object locations in a spatial layout due to the motion of the observer; it can inform us of both the static and dynamic spatial relationships of a larger-scale context. Pictorial cues, on the other hand, refer to collective depth cues such as occlusion, relative size, linear perspective, and aerial perspective: visual information that can be observed in a real scene and applied to create illusory depth in a picture (Wanger et al., 1992; Palmer, 1999). Pictorial cues are thus essential for creating a perceptually realistic pictorial environment on planar media. In binocular disparity, it is the difference between the two retinal images that provides the visual information about the spatial layout and creates the stereo visual experience (Wanger et al., 1992; Palmer, 1999). Binocular cues can thus be considered two sets of slightly different pictorial cues.

Visual perception results from how the visual system responds to the light reflected from a three-dimensional environment. The previously established computational framework, which integrates physically based lighting simulation and perceptually based tone mapping, can therefore provide an image that encompasses the pictorial cues of the real scene. To incorporate binocular disparity, the framework was expanded to generate a pair of images, one for the left eye and one for the right eye, and to display each image specifically to the corresponding eye to create the stereo visual experience.

Techniques and technologies for displaying stereo images on planar media have existed for some time. In general, they can be categorized as stereoscopic and autostereoscopic display technologies. Stereoscopic display technology requires viewers to wear special glasses; the source images are separated by the glasses through techniques such as shutters, circular polarization, or simply filtered color. Autostereoscopic displays rely on the display device itself to project separate images to the left and right eyes; they often utilize sensors to detect the viewer's position and use techniques such as lenticular lenses or parallax barriers (Lueder, 2012).
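For a sense of the magnitudes involved, binocular disparity can be expressed as the difference between the vergence angles subtended at two depths. The sketch below uses hypothetical values (a typical 6 cm interocular distance, which the 3 cm per-camera shift used later in this study approximates on each side); it is an illustration of the geometry, not a computation from the paper:

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.06):
    """Angle (degrees) between the two eyes' lines of sight when
    fixating a point straight ahead at the given distance."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

# Disparity between two depths is the difference of their vergence
# angles; nearer points subtend larger angles.
near = vergence_angle_deg(15.0)   # ~0.23 degrees at 15 m
far = vergence_angle_deg(16.0)
disparity = near - far            # small but nonzero at 15-16 m
```

At the 15 m viewing distances used in the experiments, these angular differences are tiny, which is consistent with disparity acting as a subtle additional cue rather than overwhelming the pictorial ones.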
In this study, both methods are employed to create stereo visual representations of the experimental scenes. Anaglyph 3D images are composed of two differently color-filtered images and are delivered to each eye through Anaglyph 3D glasses. For autostereoscopic display, a TOSHIBA Satellite P850 laptop was used; because its stereo viewing mode can be toggled on and off to display both JPEG Stereoscopic images (JPS, extension .jps) and Anaglyph 3D images in JPG format, it was used to display all experimental scenes in this study.

3. Experiments

This study adopts a previously established experimental design (Tai, 2013). A hallway space is composed of four 6 m × 6 m × 4 m modules. At the center of each 6 m × 6 m ceiling is a 2 m × 2 m skylight. Each skylight can be open, half open, or closed to control the luminance distribution of the interior. A camera (M) is placed at one end of the hallway, 1.5 m above the ground, focusing on the center of the visual target. The visual target is a red sphere with a radius of 30 cm, floating 1.6 m above the ground; its initial location is 15 m from the viewpoint. Two more cameras, (L) and (R), shifted 3 cm to the left and right of the first camera (M), are set up to create the stereo image pair. The skylights are controlled in two different manners, as illustrated in Figure 1, to create two different luminance distributions for the experimental scenes. In the F=B condition, the skylights are all open, so the luminance contrast of the visual target against the foreground equals its contrast against the background. In the F>B condition, the skylights are half open, open, closed, and half open, respectively, causing the luminance contrast of the visual target against the foreground to be greater than that against the background.

Figure 1. Configurations of skylights to create two different luminance distributions for experimental scenes.

Experimental scenes were rendered using RADIANCE with all parameters held constant. The HDR scenes were then tone-mapped with the Photographic tone-mapping operator to generate images in JPG format. The JPG image output from camera (M) was used as the single-camera experimental scene set. The JPG images output by cameras (L) and (R) were processed in an image-editing program to produce the Anaglyph 3D and JPEG Stereoscopic (JPS) experimental scene sets.
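The anaglyph composition step is simple channel mixing: the red channel is taken from the left-eye image and the green and blue channels from the right-eye image. A minimal sketch on nested lists of RGB tuples (an image-editing program or a library such as Pillow performs the same per-pixel operation; the pixel values here are made up):

```python
def compose_anaglyph(left_img, right_img):
    """Red/cyan anaglyph: R from the left-eye view, G and B from the
    right-eye view. Images are rows of (r, g, b) tuples in 0-255."""
    assert len(left_img) == len(right_img)
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left_img, right_img)
    ]

# Two hypothetical 1x2-pixel renders from the shifted cameras (L) and (R).
left = [[(200, 10, 10), (50, 60, 70)]]
right = [[(190, 20, 30), (40, 80, 90)]]
anaglyph = compose_anaglyph(left, right)
```

Red/cyan glasses then filter the composite so that each eye recovers approximately its own camera's view, reintroducing the disparity between the (L) and (R) renders.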
EXPERIMENT DESIGN

The Method of Constant Stimuli was employed to measure the perceived distance of the visual target in the experimental scenes. Each of the two test scenes (visual target located at 15 m under the F=B and F>B lighting conditions) was paired with one of seven comparison scenes (visual target located at seven positions ranging from 12 m to 18 m under the F=B lighting condition) for presentation to the subjects. The subjects were required to judge which visual target appeared closer and report the judgment verbally to the researcher. Each combination of test scene and comparison scene was presented ten times; that is, each subject judged the 15 m visual target, under the F>B and F=B conditions, against the same visual target located at 12, 13, 14, 15, 16, 17, and 18 m in the F=B condition, ten times each in random order.

The procedure was repeated four times. In the first of the four tests, the subjects used one eye to view the experimental scenes of the single-camera set (Monocular 2D); in the second, the subjects used both eyes to view the same single-camera set (Binocular 2D); in the third, the subjects used both eyes to view the Anaglyph 3D set (Anaglyph 3D); and finally, the subjects used both eyes to view the JPEG Stereoscopic set (JPS 3D). Ten subjects, 20 to 41 years old with normal or corrected-to-normal vision, participated in the experiment. Experiments were performed in a research lab lit by electric lighting only, to ensure a stable lighting environment from trial to trial. Experimental scenes were presented on the TOSHIBA Satellite P850 laptop, which was capable of displaying all of the experimental scene sets.

RESULTS

The Method of Constant Stimuli requires subjects to make only binary judgments, which allows more intuitive responses (Gescheider, 1984). Experimental results were analyzed using a Probit analysis model (Finney, 1971).
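The trial structure described above (2 test conditions × 7 comparison distances × 10 repetitions, in random order) can be sketched as follows; the function and condition names are illustrative, not from the paper:

```python
import random

def constant_stimuli_schedule(test_conditions=("F=B", "F>B"),
                              comparisons=(12, 13, 14, 15, 16, 17, 18),
                              repetitions=10, seed=0):
    """Every (test condition, comparison distance) pair appears exactly
    `repetitions` times; presentation order is randomized."""
    trials = [(cond, dist)
              for cond in test_conditions
              for dist in comparisons
              for _ in range(repetitions)]
    random.Random(seed).shuffle(trials)
    return trials

# 2 conditions x 7 distances x 10 repetitions = 140 binary judgments
# per subject for each of the four viewing sets.
schedule = constant_stimuli_schedule()
```

Randomizing the full crossed design is what lets the later Probit analysis treat the ten repetitions of each pair as independent samples of one psychometric point.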
Figure 2 illustrates the Probit analysis results for the same experiment performed with the four different experimental scene sets. In the Probit analysis, the x-axis represents the actual locations of the visual target in the comparison scenes, and the y-axis represents the probability that subjects reported the test target as closer. The intersection of the 0.5 line with the Probit analysis curve is the Point of Subjective Equality (PSE), the point at which the test and comparison targets are perceived as equal in depth. The PSE can thus be taken as the measured perceived distance of the test target under each luminance contrast condition. In Figure 2, A, B, C, and D represent the PSEs for the F=B condition, and A′, B′, C′, and D′ represent the PSEs for the F>B condition, for the experiments performed with the Monocular 2D, Binocular 2D, Anaglyph 3D, and JPS 3D scene sets, respectively.

Figure 2. (a) Probit analysis of results for the Monocular 2D scene set: A is the PSE for F=B, A′ is the PSE for F>B; (b) Probit analysis for the Binocular 2D scene set: B is the PSE for F=B, B′ is the PSE for F>B; (c) Probit analysis for the Anaglyph 3D scene set: C is the PSE for F=B, C′ is the PSE for F>B; (d) Probit analysis for the JPS 3D scene set: D is the PSE for F=B, D′ is the PSE for F>B.

4. Discussion

Figure 3(a) illustrates the PSEs for the F=B condition for the four experimental scene sets. Because the luminance distribution of the test scene is identical to that of the comparison scenes, the measured perceived distances of the visual targets were all close to the actual location of 15 m: ± 0.077, ± 0.066, ± 0.069, and ± m for the Monocular 2D, Binocular 2D, Anaglyph 3D, and JPS 3D scene sets, respectively. Conversely, in the F>B condition, as illustrated in Figure 3(b), where the luminance contrast of the test target against the foreground is greater than that against the background, the measured perceived distances of the test targets all increased: ± 0.081, ± 0.072, ± 0.069, and ± m for the Monocular 2D, Binocular 2D, Anaglyph 3D, and JPS 3D scene sets, respectively.

Figure 3. (a) PSEs of the results for the four experimental scene sets in the F=B condition: A for Monocular 2D, B for Binocular 2D, C for Anaglyph 3D, and D for JPS 3D; (b) PSEs of the results for the four experimental scene sets in the F>B condition: A for Monocular 2D, B for Binocular 2D, C for Anaglyph 3D, and D for JPS 3D.

The primary question asked in this study is whether binocular disparity affects the previously established finding that luminance contrast is an effective depth cue. The experiment performed with the Binocular 2D set follows the design of the previous studies and can be considered identical to one of the conditions in the previous study (Tai, 2013). Table 1 compares the two sets of results. In both experiments, the measured perceived distance of the visual target in the F>B condition increased, by 8.17% and 9.31% respectively, relative to its measured perceived distance in the F=B condition. This study therefore validates the effect of luminance contrast in increasing the perceived distance of a visual target in a perceptually realistic pictorial environment without the consideration of binocular disparity.
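As a simplified stand-in for the full Probit fit, the PSE and the D-Threshold reported later can be read off the psychometric data by linear interpolation of the observed proportions rather than a fitted cumulative normal. The proportions below are made up for illustration, not taken from the experiments:

```python
def crossing(distances, proportions, level):
    """Distance at which the psychometric function crosses `level`,
    by linear interpolation between the bracketing data points."""
    points = list(zip(distances, proportions))
    for (x0, p0), (x1, p1) in zip(points, points[1:]):
        if p0 <= level <= p1:
            return x0 + (level - p0) / (p1 - p0) * (x1 - x0)
    raise ValueError("level not bracketed by the data")

distances = [12, 13, 14, 15, 16, 17, 18]            # comparison locations (m)
proportions = [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]   # P(test judged closer), illustrative

pse = crossing(distances, proportions, 0.50)          # point of subjective equality
upper = crossing(distances, proportions, 0.75) - pse  # upper D-Threshold
lower = pse - crossing(distances, proportions, 0.25)  # lower D-Threshold
d_threshold = (upper + lower) / 2                     # averaged, as in the paper
```

A steeper psychometric function narrows the 0.25 to 0.75 span, so a smaller D-Threshold signals a more determined judgment, which is the comparison drawn in Table 3.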
Table 1. Comparison of the Binocular 2D experimental results with the same condition from the previous study.

                  Perceived distance in F=B   Perceived distance in F>B   % increase in perceived distance
Binocular 2D              ± m                         ± m                          8.17 %
Previous study            ± m                         ± m                          9.31 %

The first and second rounds of the experiment asked subjects to view the same set of experimental scenes, output by the single-camera setting, using monocular and binocular vision; the perceived distances of the visual targets increased by 9.34% and 8.17%, respectively. In both the Anaglyph 3D and JPS 3D sets, in which the displayed scene incorporated the binocular disparity cue, luminance contrast continued to influence the perceptual judgment of the visual target's location: the perceived distances increased by 9.09% and 6.02% for the Anaglyph 3D and JPS 3D sets, respectively. However, the comparison between the Binocular 2D and JPS 3D viewing conditions suggests that the additional binocular disparity cue incorporated in the simulated three-dimensional scene may help viewers recover the true distance, thus reducing the effect of luminance contrast on an object's perceived distance in the scene. Table 2 compares the percentage increase of the measured perceived distances of the visual targets from the F=B to the F>B condition for the four display conditions; the percentage is markedly smaller for the JPS 3D display.

Table 2. Comparison of the increased percentage of measured perceived distance of visual targets under different display conditions.

                  Perceived distance in F=B   Perceived distance in F>B   % increase in perceived distance
Monocular 2D              ± m                         ± m                          9.34 %
Binocular 2D              ± m                         ± m                          8.17 %
Anaglyph 3D               ± m                         ± m                          9.09 %
JPS 3D                    ± m                         ± m                          6.02 %
Table 3 compares the D-Thresholds for the experiments performed under the four display conditions. The D-Threshold is the average of the upper D-Threshold and the lower D-Threshold, each of which is the distance along the x-axis from the intersection of the 0.75 or 0.25 proportion line with the Probit analysis curve to the PSE (Gescheider, 1984). The smaller the D-Threshold, the steeper the Probit analysis curve, meaning a smaller range of error for the PSE. As indicated in Table 3, the D-Threshold decreases from Monocular 2D to Binocular 2D viewing, suggesting that subjects can make a more determined judgment of the same scene using two eyes rather than one. For the two types of stereo display, the D-Threshold for Anaglyph 3D is also smaller than for the two non-stereo display conditions (except the D-Threshold for F=B under Binocular 2D). The D-Threshold for JPS 3D, however, is clearly the smallest among the four viewing conditions. This study therefore concludes that the autostereoscopic display of the JPS 3D experimental scenes offers a pictorial environment allowing a more determined perceptual judgment of the depth effect resulting from luminance contrast.

Table 3. Comparison of D-Thresholds for the four viewing conditions.

                  D-Threshold for F=B   D-Threshold for F>B
Monocular 2D              ±                     ±
Binocular 2D              ±                     ±
Anaglyph 3D               ±                     ±
JPS 3D                    ±                     ±

5. Conclusion

There were two objectives in this study. The first was to investigate the influence of binocular disparity on the depth effect of luminance contrast. The second was to advance the visual realism of the computer-generated pictorial environment for studying and envisioning the effect of luminance contrast on depth perception. Based on the results, it is concluded that the incorporation of binocular disparity can advance the visual realism of the computer-generated pictorial environment.
A computational framework that incorporates physically based lighting simulation, perceptually based tone mapping, and autostereoscopic display technology can generate a pictorial environment that allows a more determined perceptual judgment of the depth effect resulting from luminance contrast. In addition, although the effect decreases somewhat, luminance contrast remains an effective depth cue that can affect the judgment of an object's perceived distance in this computer-generated stereo pictorial environment.

Acknowledgements

The author wishes to express his appreciation to the people who participated in the experiments. The author also wishes to extend his sincere appreciation to the National Science Council in Taiwan. This research was funded by the National Science Council under grant No. NSC E.

References

Cadík, M.; Wimmer, M.; Neumann, L. and Artusi, A.: 2008, Evaluation of HDR tone mapping methods using essential perceptual attributes, Computers & Graphics, 32(3).
Finney, D.: 1971, Probit Analysis, 3rd ed., University Press, Cambridge.
Gescheider, G. A.: 1984, Psychophysics: Method, Theory, and Application, 2nd ed., Lawrence Erlbaum.
Kuang, J.; Yamaguchi, H.; Liu, C.; Johnson, G. M. and Fairchild, M. D.: 2007, Evaluating HDR rendering algorithms, ACM Transactions on Applied Perception, 4(2), article No. 9.
Lueder, E.: 2012, 3D Displays, Wiley, Hoboken, N.J.
Mardaljevic, J.: 2001, The BRE-IDMP dataset: a new benchmark for the validation of illuminance prediction techniques, Lighting Research and Technology, 33(2).
O'Shea, R. P.; Blackburn, S. G. and Ono, H.: 1994, Contrast as a depth cue, Vision Research, 34(12).
Palmer, S. E.: 1999, Vision Science: Photons to Phenomenology, 1st ed., The MIT Press, Cambridge, Massachusetts.
Reinhard, E.; Stark, M.; Shirley, P. and Ferwerda, J.: 2002, Photographic tone reproduction for digital images, ACM Transactions on Graphics, 21(3).
Ruppertsberg, A. I. and Bloj, M.: 2006, Rendering complex scenes for psychophysics using RADIANCE: how accurate can you get?, Journal of the Optical Society of America A, 23(4).
Solso, R.: 2003, The Psychology of Art and the Evolution of the Conscious Brain, MIT Press, Cambridge, Massachusetts.
Tai, N.-C.: 2012, Space perception in real and pictorial spaces: investigation of size-related and tone-related pictorial depth cues through computer simulations, Computer-Aided Design and Applications, 9(2).
Tai, N.-C.: 2013, Application of luminance contrast in architectural design, Computer-Aided Design and Applications, 10(6).
Tai, N.-C. and Inanici, M.: 2012, Luminance contrast as depth cue: investigation and design applications, Computer-Aided Design and Applications, 9(5).
Wanger, L. R.; Ferwerda, J. A. and Greenberg, D. P.: 1992, Perceiving spatial relationships in computer-generated images, IEEE Computer Graphics and Applications, 12(3).
Ward, G. and Shakespeare, R.: 1998, Rendering with RADIANCE: The Art and Science of Lighting Visualization, Morgan Kaufmann Publishers.
More informationMobile 3D Display Technology to Realize Natural 3D Images
3D Display 3D Image Mobile Device Special Articles on User Interface Research New Interface Design of Mobile Phones 1. Introduction Nowadays, as a new method of cinematic expression continuing from the
More informationABSTRACT Purpose. Methods. Results.
ABSTRACT Purpose. Is there a difference in stereoacuity between distance and near? Previous studies produced conflicting results. We compared distance and near stereoacuities using identical presentation
More informationHAMED SARBOLANDI SIMULTANEOUS 2D AND 3D VIDEO RENDERING Master s thesis
HAMED SARBOLANDI SIMULTANEOUS 2D AND 3D VIDEO RENDERING Master s thesis Examiners: Professor Moncef Gabbouj M.Sc. Payman Aflaki Professor Lauri Sydanheimo Examiners and topic approved by the Faculty Council
More informationPerception of Surfaces from Line Drawings
Perception of Surfaces from Line Drawings CHRISTOPH HOFFMANN 1, ZYGMUNT PIZLO 2, VOICU POPESCU 1, STEVE PRICE 1 1 Computer Sciences, 2 Psychological Sciences, Purdue University We test the perception of
More informationQUALITY, QUANTITY AND PRECISION OF DEPTH PERCEPTION IN STEREOSCOPIC DISPLAYS
QUALITY, QUANTITY AND PRECISION OF DEPTH PERCEPTION IN STEREOSCOPIC DISPLAYS Alice E. Haines, Rebecca L. Hornsey and Paul B. Hibbard Department of Psychology, University of Essex, Wivenhoe Park, Colchester
More informationAdaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision
Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Zhiyan Zhang 1, Wei Qian 1, Lei Pan 1 & Yanjun Li 1 1 University of Shanghai for Science and Technology, China
More informationMulti-View Geometry (Ch7 New book. Ch 10/11 old book)
Multi-View Geometry (Ch7 New book. Ch 10/11 old book) Guido Gerig CS-GY 6643, Spring 2016 gerig@nyu.edu Credits: M. Shah, UCF CAP5415, lecture 23 http://www.cs.ucf.edu/courses/cap6411/cap5415/, Trevor
More informationImportant concepts in binocular depth vision: Corresponding and non-corresponding points. Depth Perception 1. Depth Perception Part II
Depth Perception Part II Depth Perception 1 Binocular Cues to Depth Depth Information Oculomotor Visual Accomodation Convergence Binocular Monocular Static Cues Motion Parallax Perspective Size Interposition
More informationDepth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth
Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze
More informationWe are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors
We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,800 116,000 120M Open access books available International authors and editors Downloads Our
More informationReprint. from the Journal. of the SID
A 23-in. full-panel-resolution autostereoscopic LCD with a novel directional backlight system Akinori Hayashi (SID Member) Tomohiro Kometani Akira Sakai (SID Member) Hiroshi Ito Abstract An autostereoscopic
More informationConversion of 2D Image into 3D and Face Recognition Based Attendance System
Conversion of 2D Image into 3D and Face Recognition Based Attendance System Warsha Kandlikar, Toradmal Savita Laxman, Deshmukh Sonali Jagannath Scientist C, Electronics Design and Technology, NIELIT Aurangabad,
More informationTowards Measuring of Depth Perception from Monocular Shadow Technique with Application in a Classical Painting
Towards Measuring of Depth Perception from Monocular Shadow Technique with Application in a Classical Painting Wei Wen *, Siamak Khatibi Department of Communication Blekinge Tekniska Högskola, Karlskrona,
More informationView Synthesis for Multiview Video Compression
View Synthesis for Multiview Video Compression Emin Martinian, Alexander Behrens, Jun Xin, and Anthony Vetro email:{martinian,jxin,avetro}@merl.com, behrens@tnt.uni-hannover.de Mitsubishi Electric Research
More informationCSE 165: 3D User Interaction. Lecture #3: Displays
CSE 165: 3D User Interaction Lecture #3: Displays CSE 165 -Winter 2016 2 Announcements Homework Assignment #1 Due Friday at 2:00pm To be presented in CSE lab 220 Paper presentations Title/date due by entering
More informationLimitations of Projection Radiography. Stereoscopic Breast Imaging. Limitations of Projection Radiography. 3-D Breast Imaging Methods
Stereoscopic Breast Imaging Andrew D. A. Maidment, Ph.D. Chief, Physics Section Department of Radiology University of Pennsylvania Limitations of Projection Radiography Mammography is a projection imaging
More informationThe Absence of Depth Constancy in Contour Stereograms
Framingham State University Digital Commons at Framingham State University Psychology Faculty Publications Psychology Department 2001 The Absence of Depth Constancy in Contour Stereograms Dawn L. Vreven
More informationA SXGA 3D Display Processor with Reduced Rendering Data and Enhanced Precision
A SXGA 3D Display Processor with Reduced Rendering Data and Enhanced Precision Seok-Hoon Kim KAIST, Daejeon, Republic of Korea I. INTRODUCTION Recently, there has been tremendous progress in 3D graphics
More informationStereo Graphics. Visual Rendering for VR. Passive stereoscopic projection. Active stereoscopic projection. Vergence-Accommodation Conflict
Stereo Graphics Visual Rendering for VR Hsueh-Chien Chen, Derek Juba, and Amitabh Varshney Our left and right eyes see two views, which are processed by our visual cortex to create a sense of depth Computer
More informationInput Method Using Divergence Eye Movement
Input Method Using Divergence Eye Movement Shinya Kudo kudo@kaji-lab.jp Hiroyuki Okabe h.okabe@kaji-lab.jp Taku Hachisu JSPS Research Fellow hachisu@kaji-lab.jp Michi Sato JSPS Research Fellow michi@kaji-lab.jp
More informationRV - AULA 07 - PSI3502/2018. Displays
RV - AULA 07 - PSI3502/2018 Displays Outline Discuss various types of output devices, also known as displays. Examine the video displays as one of the most widely used and most diverse group of displays.
More informationComputational Photography: Real Time Plenoptic Rendering
Computational Photography: Real Time Plenoptic Rendering Andrew Lumsdaine, Georgi Chunev Indiana University Todor Georgiev Adobe Systems Who was at the Keynote Yesterday? 2 Overview Plenoptic cameras Rendering
More informationDEPTH PERCEPTION. Learning Objectives: 7/31/2018. Intro & Overview of DEPTH PERCEPTION** Speaker: Michael Patrick Coleman, COT, ABOC, & former CPOT
DEPTH PERCEPTION Speaker: Michael Patrick Coleman, COT, ABOC, & former CPOT Learning Objectives: Attendees will be able to 1. Explain what the primary cue to depth perception is (vs. monocular cues) 2.
More informationAutomatic 2D-to-3D Video Conversion Techniques for 3DTV
Automatic 2D-to-3D Video Conversion Techniques for 3DTV Dr. Lai-Man Po Email: eelmpo@cityu.edu.hk Department of Electronic Engineering City University of Hong Kong Date: 13 April 2010 Content Why 2D-to-3D
More informationChapter 7. Conclusions and Future Work
Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between
More informationDesign and Evaluation of a 3D Video System Based on H.264 View Coding Hari Kalva, Lakis Christodoulou, Liam M. Mayron, Oge Marques, and Borko Furht
Design and Evaluation of a 3D Video System Based on H.264 View Coding Hari Kalva, Lakis Christodoulou, Liam M. Mayron, Oge Marques, and Borko Furht Dept. of Computer Science and Engineering Florida Atlantic
More informationThe perception of surface orientation from multiple sources of optical information
Perception & Psychophysics 1995, 57 (5), 629 636 The perception of surface orientation from multiple sources of optical information J. FARLEY NORMAN, JAMES T. TODD, and FLIP PHILLIPS Ohio State University,
More informationMeet icam: A Next-Generation Color Appearance Model
Meet icam: A Next-Generation Color Appearance Model Why Are We Here? CIC X, 2002 Mark D. Fairchild & Garrett M. Johnson RIT Munsell Color Science Laboratory www.cis.rit.edu/mcsl Spatial, Temporal, & Image
More information3D Image Sensor based on Opto-Mechanical Filtering
3D Image Sensor based on Opto-Mechanical Filtering Barna Reskó 1,2, Dávid Herbay 3, Péter Korondi 3, Péter Baranyi 2 1 Budapest Tech 2 Computer and Automation Research Institute of the Hungarian Academy
More informationIntermediate view synthesis considering occluded and ambiguously referenced image regions 1. Carnegie Mellon University, Pittsburgh, PA 15213
1 Intermediate view synthesis considering occluded and ambiguously referenced image regions 1 Jeffrey S. McVeigh *, M. W. Siegel ** and Angel G. Jordan * * Department of Electrical and Computer Engineering
More informationScaling of Rendered Stereoscopic Scenes
University of West Bohemia in Pilsen Department of Computer Science and Engineering Univerzitni 8 30614 Pilsen Czech Republic Scaling of Rendered Stereoscopic Scenes Master Thesis Report Ricardo José Teixeira
More informationOPTIMIZED QUALITY EVALUATION APPROACH OF TONED MAPPED IMAGES BASED ON OBJECTIVE QUALITY ASSESSMENT
OPTIMIZED QUALITY EVALUATION APPROACH OF TONED MAPPED IMAGES BASED ON OBJECTIVE QUALITY ASSESSMENT ANJIBABU POLEBOINA 1, M.A. SHAHID 2 Digital Electronics and Communication Systems (DECS) 1, Associate
More informationMulti-View Omni-Directional Imaging
Multi-View Omni-Directional Imaging Tuesday, December 19, 2000 Moshe Ben-Ezra, Shmuel Peleg Abstract This paper describes a novel camera design or the creation o multiple panoramic images, such that each
More informationApproaches to Visual Mappings
Approaches to Visual Mappings CMPT 467/767 Visualization Torsten Möller Weiskopf/Machiraju/Möller Overview Effectiveness of mappings Mapping to positional quantities Mapping to shape Mapping to color Mapping
More informationThe Appearance of Surfaces Specified by Motion Parallax and Binocular Disparity
THE QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 1989,41A (4) 697-717 The Appearance of Surfaces Specified by Motion Parallax and Binocular Disparity Brian J. Rogers University of Oxford Thomas S. Collett
More informationReal-time Integral Photography Holographic Pyramid using a Game Engine
Real-time Integral Photography Holographic Pyramid using a Game Engine Shohei Anraku, Toshiaki Yamanouchi and Kazuhisa Yanaka Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi-shi, Kanagawa-ken,
More informationVisual Rendering for VR. Stereo Graphics
Visual Rendering for VR Hsueh-Chien Chen, Derek Juba, and Amitabh Varshney Stereo Graphics Our left and right eyes see two views, which are processed by our visual cortex to create a sense of depth Computer
More informationVirtual Reality ll. Visual Imaging in the Electronic Age. Donald P. Greenberg November 16, 2017 Lecture #22
Virtual Reality ll Visual Imaging in the Electronic Age Donald P. Greenberg November 16, 2017 Lecture #22 Fundamentals of Human Perception Retina, Rods & Cones, Physiology Receptive Fields Field of View
More informationEvolution of Impossible Objects
Evolution of Impossible Objects Kokichi Sugihara Meiji Institute for Advanced Study of Mathematical Sciences, Meiji University, 4-21-1 Nakano, Nakano-ku, Tokyo 164-8525, Japan http://www.isc.meiji.ac.jp/~kokichis/
More informationEffect of Contrast on the Quality of 3D Visual Perception
Effect of Contrast on the Quality of 3D Visual Perception Mahsa T. Pourazad TELUS Communications Company, Canada University of British Columbia, Canada pourazad@ece.ubc.ca Zicong Mai, Panos Nasiopoulos
More informationImage Based Lighting with Near Light Sources
Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some
More informationImage Based Lighting with Near Light Sources
Image Based Lighting with Near Light Sources Shiho Furuya, Takayuki Itoh Graduate School of Humanitics and Sciences, Ochanomizu University E-mail: {shiho, itot}@itolab.is.ocha.ac.jp Abstract Recent some
More informationCSE 165: 3D User Interaction
CSE 165: 3D User Interaction Lecture #4: Displays Instructor: Jurgen Schulze, Ph.D. CSE 165 - Winter 2015 2 Announcements Homework Assignment #1 Due tomorrow at 1pm To be presented in CSE lab 220 Homework
More informationInteractive Inverted Perspective Rendering for Architectural Visualization
Interactive Inverted Perspective Rendering for Architectural Visualization Vinod Srinivasan Ozan Ozener Ergun Akleman 2005 June 20th 22nd Vienna University of Technology Vienna, Austria Visualization Sciences
More informationPSYCHOMETRIC ASSESSMENT OF STEREOSCOPIC HEAD-MOUNTED DISPLAYS
PSYCHOMETRIC ASSESSMENT OF STEREOSCOPIC HEAD-MOUNTED DISPLAYS Logan Williams 1, Charles Lloyd 2, James Gaska 1, Charles Bullock 1, and Marc Winterbottom 1 1 OBVA Laboratory, USAF School of Aerospace Medicine,
More informationDepth cue integration: stereopsis and image blur
Vision Research 40 (2000) 3501 3506 www.elsevier.com/locate/visres Depth cue integration: stereopsis and image blur George Mather *, David R.R. Smith Laboratory of Experimental Psychology, Biology School,
More information(12) Patent Application Publication (10) Pub. No.: US 2005/ A1
(19) United States (12) Patent Application Publication (10) Pub. No.: US 2005/0219694 A1 Vesely et al. US 20050219694A1 (43) Pub. Date: Oct. 6, 2005 (54) (76) (21) (22) (60) (51) HORIZONTAL PERSPECTIVE
More information(0, 1, 1) (0, 1, 1) (0, 1, 0) What is light? What is color? Terminology
lecture 23 (0, 1, 1) (0, 0, 0) (0, 0, 1) (0, 1, 1) (1, 1, 1) (1, 1, 0) (0, 1, 0) hue - which ''? saturation - how pure? luminance (value) - intensity What is light? What is? Light consists of electromagnetic
More informationComputational Aesthetics for Rendering Virtual Scenes on 3D Stereoscopic Displays
Computational Aesthetics for Rendering Virtual Scenes on 3D Stereoscopic Displays László SZIRMAY-KALOS, Pirkko OITTINEN, and Balázs TERÉKI Introduction Computer graphics builds virtual scenes that are
More information3D Video services. Marco Cagnazzo
3D Video services Marco Cagnazzo Part III: Advanced services Overview 3D Video systems History Acquisition Transmission (coding) Rendering Future services Super Hi Vision systems High speed cameras High
More informationAn Improved Image Resizing Approach with Protection of Main Objects
An Improved Image Resizing Approach with Protection of Main Objects Chin-Chen Chang National United University, Miaoli 360, Taiwan. *Corresponding Author: Chun-Ju Chen National United University, Miaoli
More informationA Simple Viewfinder for Stereoscopic Video Capture Systems
A Simple Viewfinder for Stereoscopic Video Capture Systems Cary Kornfeld Departement Informatik ETH Zürich CH 8092 Zürich, Switzerland Cary.Kornfeld@inf.ethz.ch Abstract The emergence of low cost digital
More informationVisual Pathways to the Brain
Visual Pathways to the Brain 1 Left half of visual field which is imaged on the right half of each retina is transmitted to right half of brain. Vice versa for right half of visual field. From each eye
More information3D Unsharp Masking for Scene Coherent Enhancement Supplemental Material 1: Experimental Validation of the Algorithm
3D Unsharp Masking for Scene Coherent Enhancement Supplemental Material 1: Experimental Validation of the Algorithm Tobias Ritschel Kaleigh Smith Matthias Ihrke Thorsten Grosch Karol Myszkowski Hans-Peter
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More informationView Synthesis for Multiview Video Compression
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com View Synthesis for Multiview Video Compression Emin Martinian, Alexander Behrens, Jun Xin, and Anthony Vetro TR2006-035 April 2006 Abstract
More informationPerceptual Effects in Real-time Tone Mapping
Perceptual Effects in Real-time Tone Mapping G. Krawczyk K. Myszkowski H.-P. Seidel Max-Planck-Institute für Informatik Saarbrücken, Germany SCCG 2005 High Dynamic Range (HDR) HDR Imaging Display of HDR
More informationThe Assumed Light Direction for Perceiving Shape from Shading
The Assumed Light Direction for Perceiving Shape from Shading James P. O Shea Vision Science University of California, Berkeley Martin S. Banks Vision Science University of California, Berkeley Maneesh
More information