Functional Difference Predictors (FDPs): Measuring Meaningful Image Differences


James A. Ferwerda, Program of Computer Graphics, Cornell University
Fabio Pellacini, Pixar Animation Studios, Emeryville, CA

[Fig. 1. Images rendered with different reflection algorithms and a VDP map showing areas of visible difference: a) raytraced reflection; b) environment-mapped reflection; c) VDP result [2].]

Abstract - In this paper we introduce Functional Difference Predictors (FDPs), a new class of perceptually-based image difference metrics that predict how image errors affect the ability to perform visual tasks using the images. To define the properties of FDPs, we conduct a psychophysical experiment that focuses on two visual tasks: spatial layout and material estimation. In the experiment we introduce errors in the positions and contrasts of objects reflected in glossy surfaces and ask subjects to make layout and material judgments. The results indicate that layout estimation depends only on positional errors in the reflections, and material estimation only on contrast errors. These results suggest that in many task contexts, large visible image errors may be tolerated without loss in task performance, and that FDPs may be better predictors of the relationship between errors and performance than current Visible Difference Predictors (VDPs).

INTRODUCTION

Measuring the differences between images is an important aspect of computer graphics, especially when comparing the performance of rendering algorithms. In the past, two kinds of metrics have been used. Physical metrics [1] compare images in terms of the numerical differences between their pixel values. A newer trend in computer graphics is to use perceptual metrics based on computational models of human vision [3,5]. These metrics, generally formulated as Visible Difference Predictors (VDPs), measure the probability that observers will be able to detect differences in pixel contrasts between images.
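To make the distinction concrete, a minimal physical metric can be sketched in a few lines of Python (an illustrative sketch, not the specific metric of [1]):

```python
import numpy as np

def rms_difference(image_a, image_b):
    """Physical metric: root-mean-square difference between pixel values.

    Inputs are arrays of identical shape with values in [0, 1]. Metrics
    like this weight every pixel error equally, with no regard for
    whether a human observer could actually see the difference.
    """
    if image_a.shape != image_b.shape:
        raise ValueError("images must have the same dimensions")
    return float(np.sqrt(np.mean((image_a - image_b) ** 2)))

# Two 4x4 grayscale "images" that differ in a single pixel
a = np.zeros((4, 4))
b = a.copy()
b[0, 0] = 0.4
print(rms_difference(a, b))  # 0.1
```

A VDP replaces this uniform pixel weighting with a model of human contrast sensitivity, reporting detection probabilities rather than raw error magnitudes.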
VDPs have been widely used by graphics researchers to compare rendering algorithms and to determine when images produced by different algorithms will be visually indistinguishable from one another (see [2,7,11] for recent reviews). However, when we look at images, we do not see pixels. Rather, we see objects with distinct shapes, sizes, locations, motions, and materials. We use the visual information provided by images to make judgments about the properties of these objects and to perform meaningful visual tasks [4]. Different rendering methods can affect this information in different ways, as Fig. 1 illustrates. Fig. 1a shows a tabletop scene with a glossy teapot whose reflection was rendered using a raytracing algorithm. Fig. 1b shows the same scene, but here the reflection was rendered using environment mapping, a fast but approximate technique that introduces projective errors in the reflection with respect to the raytraced version. Running a VDP on these images produces Fig. 1c, where the probability that an observer will detect differences in the images is proportional to the grayscale values. The VDP correctly predicts that observers can see differences in the reflections on the teapots. However, while the images are visibly different, there are many respects in which they are also similar. For example, it seems clear that the two teapots are made of the same material. If these images were used in an e-commerce application to show the finish on the teapot, they would be of equal fidelity with respect to that task, since the appearance of the material is the same in both images. This simple example shows that while current perceptual metrics can predict whether two images will be visibly different, they do not predict whether these differences will be visually significant.
We propose that in many applications, the most meaningful way to compare images is to determine whether their differences affect the task the user is trying to perform. We will say that two images are functionally equivalent with respect to a task when the user's ability to perform the task is the same using either image. In the remainder of this paper, we first define how images can be functionally equivalent or different; we then describe an experiment we conducted to define the properties of functional difference metrics for computer graphics. Inspired by the term VDP, we call these new metrics Functional Difference Predictors (FDPs).

RELATED WORK

Measuring how image differences affect a subject's ability to perform visual tasks has been widely studied in experimental psychology (see [9] for a review). Unfortunately, most of the image manipulations and tasks that have been explored are so reductionistic that the results of these studies are not directly applicable to the problem of developing functional difference metrics for computer graphics. However, the experimental methodologies developed do provide an essential foundation for this work. In the computer graphics literature itself, research in this area is just beginning. Watson et al. [16] and Rushmeier et al. [13] have studied the correlation between VDP measures and subjects' ratings of shape in the context of geometric compression. Rademacher et al. [10] conducted an experiment to measure the perception of visual realism and its correlation with various visual cues. Finally, Wanger et al. [15] and Rodger and Browse [12] have explored how different visual cues affect subjects' abilities to assess the spatial layout and shapes of objects in computer-rendered scenes.

FUNCTIONAL DIFFERENCE PREDICTORS

A Functional Difference Predictor (FDP) is an operator that takes two images as input and calculates whether differences in the images will affect a user's ability to perform a visual task. FDPs are formulated with respect to tasks, since they assess the fidelity of the visual information required for the task. There are therefore potentially as many FDPs as there are classes of tasks [14]. With this in mind, it is interesting to note that VDPs are in fact a specific instance of FDPs in which the task is detecting contrast differences between images.
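The relationship between the two predictor classes can be sketched in Python; the names and the single deterministic response model below are our illustrative assumptions, not an implementation from this paper:

```python
def fdp(image_a, image_b, task_response):
    """Probability that a binary task judgment differs between two images.

    `task_response` maps an image to the judgment a viewer would report
    for the task (e.g. a material label). With a single deterministic
    response model the probability collapses to 0 or 1; a fuller model
    would average over a population of noisy observers.
    """
    return 1.0 if task_response(image_a) != task_response(image_b) else 0.0

def vdp(image_a, image_b):
    """VDP as the special case of an FDP: the 'task' is detecting any
    contrast difference, so the response is the image content itself."""
    return fdp(image_a, image_b, task_response=lambda img: img)

# Two toy 'images' that are visibly different (the VDP fires) yet give
# the same judgment for a task depending only on the first component.
img1, img2 = (0.2, 0.5), (0.2, 0.6)
print(vdp(img1, img2))                                    # 1.0
print(fdp(img1, img2, task_response=lambda img: img[0]))  # 0.0
```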
This means that the FDP framework we are developing is not in conflict with earlier work, but is rather a significant generalization of it. In order to quantify differences in users' abilities to perform visual tasks, we need to be able to measure these abilities. Different tasks might require different measures: in some tasks speed of performance might be important, while in others accuracy might be paramount. Since the FDP operator does not depend on the specific form of this measure, to simplify our experiment we restricted our attention to tasks that have binary outcomes, such as yes/no and same/different judgments. In this case, the FDP calculates the probability that a user will perform the task differently using different images.

EXPERIMENTS

To define the major properties of FDPs and to compare them to VDPs, we conducted a multipart psychophysical experiment. Although we would eventually like to develop FDPs for a wide range of tasks and image differences, initially we needed to restrict our studies to a manageable domain. We chose to study two tasks that have widespread utility in both computer graphics and real-world applications: material and spatial layout estimation. Since previous studies [6,8] have shown that material and layout perception are affected by the characteristics of surface reflections, we manipulated this visual cue to determine how errors in rendering reflections affect task performance.

A. Stimuli

To measure the relationships between physical image differences and functional differences in performance, we presented subjects with pairs of computer-generated images. The scene model used to generate the images consisted of a sphere and two cylinders in a box illuminated by an overhead area light source. All materials were achromatic, and the sphere was rendered with specular reflections to simulate a glossy surface. Images were generated using 3dStudioMax.
The Appendix shows the geometric layout of the scene and provides numerical values for the parameters used to generate the images. Examples of the images are shown in Fig. 2.

[Fig. 2. Examples of stimuli used in the experiment. Columns: correct image, small error, large error; rows: contrast error, position error.]

On each trial the two images presented were the same except for the reflections in the surfaces of the spheres. In one image the reflection was a correct representation of the scene, while in the other it was incorrect in either the contrasts or the positions of the reflected cylinders. For each of four contrast and four position conditions, we generated sets of images using different scene parameters. Each set consisted of three image pairs, with the magnitudes of the errors increasing within the set. In the rest of the paper, we refer to these sets with the labels C1 to C4 for the contrast conditions and P1 to P4 for the position conditions.

B. Procedure

The subjects were asked four questions about each image pair; the specific wording is given in the Appendix. The four questions correspond to four visual tasks. The material estimation and layout estimation tasks were designed to measure functional differences between the images; the image difference and image correctness tasks were designed to measure more traditional visible differences. In the material estimation task, we asked the subjects whether the spheres in both images were made of the same material. In the layout estimation task, we asked whether the relative positions of objects were the same in both images. The proportion of negative responses to these two questions is a direct measure

of the difference in utility of the images for the respective task (i.e., the probability that the subjects' performance will be affected by the difference). In the image difference task we asked subjects if they could see any differences between the images. Our goal here was to compare the predictions of our new FDPs and traditional VDPs. However, rather than using a VDP algorithm to measure image differences, we simply asked the subjects whether the images were the same or not. The proportion of negative responses to this question is a direct measure of the visible differences (i.e., the probability that subjects can detect a difference between the images). Effectively, this question lets us benchmark our FDPs against the best possible VDP: the human observer. Finally, in the image correctness task, we asked the subjects if they could tell which image was correctly reflecting the surrounding environment. We asked this question to explore whether differences in performance on the material and layout estimation tasks depend on being able to correctly identify image errors.

Eighteen subjects participated in the experiment: the second author, 6 computer graphics graduate students and researchers, and 11 graduate students and researchers in other engineering fields. All subjects had normal or corrected-to-normal vision and, with the exception of the author, were naïve to the purpose and methods of the experiment. The experiment was conducted on paper. Each image pair was printed side-by-side at the top of a page, with the four questions printed below. The images were tonemapped and color corrected for the printing process using the procedure described in [8]. The subjects replied to the questions by marking checkboxes on each sheet. Each subject was shown each image pair only once. The pairs were presented in random order, and the horizontal positions of the correct and incorrect images were randomized and balanced across subjects.

RESULTS

Fig. 3 summarizes the results of the experiment. Each column represents the results for one image set. Series C1 to C4 show the results for images with contrast differences, and series P1 to P4 show results for images with positional differences. Within each series, the results are graphed in order of increasing magnitude of physical image difference. Since the subjects' responses were binary variables, we used binomial distribution statistics to compute mean and variance measures and used the logistic regression method when testing for correlations [17]. Chi-square tests for statistical significance were also performed for all data points. Unless otherwise mentioned, the confidence intervals on all data points are 0.02 or less.

A. Image difference task

The results for the image difference task are shown in Fig. 3a. Here visible difference is expressed as the probability that subjects reported seeing the images as different. Note that in each series, the smallest image differences were always just noticeable (i.e., at or above the standard 75% discrimination threshold) and larger differences were always clearly visible.

B. Image correctness task

The results for the image correctness task are shown in Fig. 3b. Here performance is expressed as the probability that the subjects selected the image that was a correct representation of the scene. The graph shows that, in contrast to the subjects' high level of performance on the image difference task, performance here was much more mixed and often below threshold, suggesting that subjects were frequently unsure about which image was an accurate representation of the scene.

C. Material estimation task

The results for the material estimation task are shown in Fig. 3c. Here performance is expressed as the probability that the subjects saw the spheres as being made of different materials. The graph shows that this task is clearly affected by contrast differences in the reflections.
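The binomial summary statistics behind each plotted data point can be sketched as follows (an illustrative Python snippet; the subject counts in the example are hypothetical):

```python
import math

def binomial_stats(successes, n):
    """Mean proportion and standard error for a binary response variable.

    Each data point is a proportion of yes/no responses across n
    subjects; the error bars in Fig. 3 are twice this standard error.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se

# Hypothetical data point: 16 of 18 subjects reported the images as different
p, se = binomial_stats(16, 18)
print(round(p, 3), round(2 * se, 3))  # 0.889 0.148
```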
[Fig. 3. Results: a) image difference task (VDP); b) image correctness task; c) material estimation task (FDP for material); d) layout estimation task (FDP for layout). Yellow: contrast series. Blue: position series. Error bars are twice the standard error.]

When contrast differences were present, subjects consistently saw the spheres as being made of different materials. On the other hand, positional differences in the reflections did not produce differences in material appearance.

D. Layout estimation task

The results for the layout estimation task are shown in Fig. 3d. Here performance is expressed as the probability that the subjects saw the objects as being in different locations in the two images. The results are reversed with respect to the material estimation task: differences in the positions of the reflections caused the layouts to appear different, but contrast differences in the reflections had no effect on layout appearance.

DISCUSSION

Several implications of the experimental results should be mentioned. First, logistic regressions found no correlations between the magnitude of the errors introduced into the images and the subjects' performance in the material and layout tasks (p > 0.56 in all cases). It is interesting to note that under these conditions, although the range of error magnitudes introduced is large, varying from just visible to completely objectionable, the subjects' abilities to perform the tasks do not seem to depend on the magnitude of the errors. Subjects appear to be either totally affected or totally unaffected by the errors, depending on the type of error introduced. This result was surprising, and is counter to the tacit assumption underlying many visible difference metrics: that suprathreshold error magnitude and task performance are monotonically related. Second, we found no correlations between the subjects' performance in the image difference task and their performance in the material and layout estimation tasks (p > 0.45 in all cases). This indicates that the visibility of image differences is not a good predictor of subjects' performance on the latter tasks.
In particular this suggests that, for many visual tasks, VDPs may be too conservative as image difference metrics: while they can predict whether two images will be visibly different, they cannot predict whether these differences will have any meaningful effect on task performance.

FORMULATING FDP METRICS

Based on the findings of our experiments, we can now formulate FDP metrics for graphics applications. As mentioned earlier, there will be a separate formulation for each task. We write:

    FDP(material estimation) = 1 for contrast errors
                               0 for position errors

    FDP(layout estimation)   = 0 for contrast errors
                               1 for position errors

We can apply these metrics in graphics applications in the following way. Imagine that a user is trying to model a three-dimensional scene. The application is trying to provide high-quality images at interactive rates, but computational resources are not sufficient, and rendering shortcuts need to be taken. If it can be determined that the user is adjusting object material properties (either through automatic mode tracking or manual user preference settings), then the decision can be made to preserve reflection contrasts at the expense of introducing positional errors in the reflections (e.g., by the use of environment mapping techniques). Similar (though in this case opposite) rendering decisions can be made if the user is moving the objects in the scene. It should be noted that the use of FDPs does not preclude the use of traditional VDPs in perceptually-based rendering; in fact, FDPs can work in concert with VDPs, focusing the computational effort involved in applying VDPs on situations where it has been determined that visible errors will negatively impact user performance.
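A hedged sketch of this decision rule in Python follows; the FDP table reflects our experimental findings, while the shortcut names (and the contrast-degrading shortcut itself) are hypothetical illustrations:

```python
# FDP values from our experiments: probability that an error class
# affects performance of each task.
FDP = {
    "material estimation": {"contrast": 1.0, "position": 0.0},
    "layout estimation":   {"contrast": 0.0, "position": 1.0},
}

# Hypothetical rendering shortcuts and the error class each introduces.
TECHNIQUE_ERROR = {
    "environment mapping":        "position",  # fast, projective errors
    "contrast-reducing shortcut": "contrast",  # illustrative placeholder
}

def choose_shortcut(task):
    """Pick the shortcut whose error class least affects the user's task."""
    return min(TECHNIQUE_ERROR, key=lambda t: FDP[task][TECHNIQUE_ERROR[t]])

print(choose_shortcut("material estimation"))  # environment mapping
print(choose_shortcut("layout estimation"))    # contrast-reducing shortcut
```

A real system would derive the current task from mode tracking or user preferences, as described above.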
It should also be emphasized that although the FDPs for the tasks and errors we studied have simple binary formulations, FDP metrics for other tasks and errors may be more complex functions that depend on the magnitudes of the errors as well as other factors. Further study is clearly necessary to develop a general formulation for FDP metrics.

CONCLUSIONS AND FUTURE WORK

This paper has introduced Functional Difference Predictors (FDPs), a new class of perceptually-based image difference metrics. Unlike VDPs, which predict whether images will be visibly different, FDPs predict whether images will be functionally different, affecting a user's ability to perform a visual task. In our experimental studies, we have introduced a new methodology for measuring functional differences between images, and the results of the experiments have shown that, in the cases studied, FDPs are superior to VDPs at predicting how rendering errors will affect a user's ability to perform a visual task. Although our initial experiment only looked at two tasks, material and layout estimation, we believe that our methodology can be used to explore and develop FDPs for a wide range of meaningful visual tasks. Although we feel our initial results are promising, there is clearly much more work to be done to fully develop FDP metrics, and our studies are only a small first step toward this goal. In future work, we hope to develop FDPs for other classes of visual tasks and other kinds of image errors. At first glance this might seem unachievable, since there are potentially an infinite variety of tasks and errors. However, we believe the problem is tractable, because recent perception research [12,14,15] has shown that visual tasks can be organized into classes in terms of the visual information that is essential for the task and the information that is marginal or irrelevant. A single formulation of an FDP should suffice for each class.
Similarly, the image errors produced by common graphics or imaging algorithms can be organized into a small number of classes (e.g., noise, projective distortions, etc.), and FDPs can be tailored to the error classes produced by particular algorithms. In the context of computer graphics, we would also like to extend the FDP framework to include both photorealistic and non-realistic rendering styles. This would allow us to assess the impact of image realism on a user's ability to perform the

task they are trying to do. For example, using the methods described, we should be able to quantify when technical-illustration-like renderings are superior to realistic images and vice versa. This should enable the development of efficient but high-fidelity rendering methods in which the rendering style is optimized for the task at hand.

ACKNOWLEDGEMENTS

We would like to thank Steve Westin and James Cutting for their useful comments, all of our test subjects for their efforts and patience, and Ben Watson for his VDP processing expertise. This work was supported by NSF grants ASC , IIS , and the Cornell Program of Computer Graphics, and was performed on equipment generously donated by Intel Corporation.

REFERENCES

[1] Arvo, J., Torrance, K. and Smits, B. A Framework for the Analysis of Error in Global Illumination Algorithms. In Proceedings of SIGGRAPH 1994.
[2] Bolin, M.R. and Meyer, G.W. A Perceptually Based Adaptive Sampling Algorithm. In Proceedings of SIGGRAPH 1998.
[3] Daly, S. The Visible Differences Predictor: An Algorithm for the Assessment of Image Fidelity. In Digital Images and Human Vision, MIT Press.
[4] Gibson, J.J. The Ecological Approach to Visual Perception. Lawrence Erlbaum Associates.
[5] Lubin, J. A Visual Discrimination Model for Imaging System Design and Evaluation. In Vision Models for Target Detection and Recognition, E. Peli, ed., World Scientific.
[6] Madison, C., Thompson, W., Kersten, D., Shirley, P. and Smits, B. Use of Interreflection and Shadow for Surface Contact. Perception & Psychophysics, 63(2), Psychonomic Society Publications.
[7] Myszkowski, K., Tawara, T., Akamine, H. and Seidel, H.-P. Perception-Guided Global Illumination Solution for Animation Rendering. In Proceedings of SIGGRAPH 2001.
[8] Pellacini, F., Ferwerda, J.A. and Greenberg, D.P. Toward a Psychophysically-based Light Reflection Model for Image Synthesis. In Proceedings of SIGGRAPH 2000.
[9] Palmer, S.E. Vision Science. MIT Press.
[10] Rademacher, P., Lengyel, J., Cutrell, E. and Whitted, T. Measuring the Perception of Visual Realism in Images. In Rendering Techniques 2001.
[11] Ramasubramanian, M., Pattanaik, S.N. and Greenberg, D.P. A Perceptually Based Physical Error Metric for Realistic Image Synthesis. In Proceedings of SIGGRAPH 1999.
[12] Rodger, J.C. and Browse, R.A. (2000). Choosing Rendering Parameters for the Effective Communication of 3D Shape. IEEE Computer Graphics and Applications, 20(2).
[13] Rushmeier, H., Rogowitz, B. and Piatko, C. Perceptual Issues in Substituting Texture for Geometry. In Human Vision and Electronic Imaging V, Proc. of SPIE, 3959.
[14] Schrater, P.R. and Kersten, D. Statistical Structure and Task Dependence in Visual Cue Integration. Workshop on Statistical and Computational Theories of Vision, Fort Collins, Colorado, June.
[15] Wanger, L.R., Ferwerda, J.A. and Greenberg, D.P. Perceiving Spatial Relationships in Computer-generated Images. IEEE Computer Graphics and Applications, 12(3).
[16] Watson, B., Friedman, A. and McGaffey, A. Measuring and Predicting Visual Fidelity. In Proceedings of SIGGRAPH 2001.
[17] Winer, B.J., Brown, D. and Michels, K. Statistical Principles in Experimental Design. McGraw-Hill.

APPENDIX

[Figure: top and front views of the spatial layout of the scene used to generate images for the experiment, showing the overhead area light.]

The scene model consisted of a sphere and two cylinders in a box illuminated by an overhead area light source. The room surfaces had a diffuse reflectance of 0.7, while the sphere had a diffuse reflectance of 0.26 and a specular coefficient of 0.3. The following table reports the diffuse reflectances of the cylinders and the angle indicated in the figure, for each image (when values change within a series, they are the ones used for the correct image and the three variations, respectively).
Series   Cylinders albedo         Cylinders angle (degrees)
C1       0.5, 0.67, 0.83, 1       0
C2       0.5, 0.33, 0.17, 0       0
C3       1, 0.83, 0.67, …         …
C4       …, 0.42, 0.58, …         0
P1       …                        …, 11.25, 22.5, …
P2       …                        …, …, -22.5, …
P3       …                        …, 50.62, 56.25, 61.87
P4       …                        …, 39.38, 33.75, …

The following questions were asked in the experiment:

1. Image difference task: "Are the two images the same?" Possible answers: Yes/No.
2. Image correctness task: "Which of these two spheres correctly reflects the environment?" Possible answers: Left/Right.
3. Material estimation task: "Looking at the reflections on the two spheres, are the spheres made of the same material?" Possible answers: Yes/No.
4. Layout estimation task: "Looking at the reflections on the two spheres, are the relative positions of the spheres and the cylinders the same in both images?" Possible answers: Yes/No.
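For instance, the fully recoverable contrast series above can be turned into trial pairs with a few lines of Python (an illustrative sketch; the pairing helper is ours, not from the experiment):

```python
def make_trials(series_values):
    """Pair the correct parameter value with each of its three error
    variants, in order of increasing error magnitude."""
    correct, *variants = series_values
    return [(correct, v) for v in variants]

# Cylinder albedos for contrast series C1 and C2 (Appendix table)
c1 = [0.5, 0.67, 0.83, 1.0]
c2 = [0.5, 0.33, 0.17, 0.0]
print(make_trials(c1))  # [(0.5, 0.67), (0.5, 0.83), (0.5, 1.0)]
print(make_trials(c2))  # [(0.5, 0.33), (0.5, 0.17), (0.5, 0.0)]
```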


Rendering Light Reflection Models

Rendering Light Reflection Models Rendering Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 27, 2015 Lecture #18 Goal of Realistic Imaging The resulting images should be physically accurate and

More information

LightSlice: Matrix Slice Sampling for the Many-Lights Problem

LightSlice: Matrix Slice Sampling for the Many-Lights Problem LightSlice: Matrix Slice Sampling for the Many-Lights Problem SIGGRAPH Asia 2011 Yu-Ting Wu Authors Jiawei Ou ( 歐嘉蔚 ) PhD Student Dartmouth College Fabio Pellacini Associate Prof. 2 Rendering L o ( p,

More information

Chapter 1 Introduction

Chapter 1 Introduction Chapter 1 Introduction The central problem in computer graphics is creating, or rendering, realistic computergenerated images that are indistinguishable from real photographs, a goal referred to as photorealism.

More information

Greg Ward / SIGGRAPH 2003

Greg Ward / SIGGRAPH 2003 Global Illumination Global Illumination & HDRI Formats Greg Ward Anyhere Software Accounts for most (if not all) visible light interactions Goal may be to maximize realism, but more often visual reproduction

More information

COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON. Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij

COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON. Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij Intelligent Systems Lab Amsterdam, University of Amsterdam ABSTRACT Performance

More information

A Multiscale Analysis of the Touch-Up Problem

A Multiscale Analysis of the Touch-Up Problem A Multiscale Analysis of the Touch-Up Problem Journal: CIC18 Manuscript ID: Draft Presentation Type: Oral Date Submitted by the Author: n/a Complete List of Authors: Ferwerda, James; Rochester Institute

More information

STUDY ON DISTORTION CONSPICUITY IN STEREOSCOPICALLY VIEWED 3D IMAGES

STUDY ON DISTORTION CONSPICUITY IN STEREOSCOPICALLY VIEWED 3D IMAGES STUDY ON DISTORTION CONSPICUITY IN STEREOSCOPICALLY VIEWED 3D IMAGES Ming-Jun Chen, 1,3, Alan C. Bovik 1,3, Lawrence K. Cormack 2,3 Department of Electrical & Computer Engineering, The University of Texas

More information

Using the Visual Differences Predictor to Improve Performance of Progressive Global Illumination Computation

Using the Visual Differences Predictor to Improve Performance of Progressive Global Illumination Computation Using the Visual Differences Predictor to Improve Performance of Progressive Global Illumination Computation VLADIMIR VOLEVICH University of Aizu KAROL MYSZKOWSKI Max-Planck-Institute for Computer Science

More information

Introduction. Chapter Computer Graphics

Introduction. Chapter Computer Graphics Chapter 1 Introduction 1.1. Computer Graphics Computer graphics has grown at an astounding rate over the last three decades. In the 1970s, frame-buffers capable of displaying digital images were rare and

More information

Selective rendering for efficient ray traced stereoscopic images

Selective rendering for efficient ray traced stereoscopic images Vis Comput (2010) 26: 97 107 DOI 10.1007/s00371-009-0379-4 ORIGINAL ARTICLE Selective rendering for efficient ray traced stereoscopic images Cheng-Hung Lo Chih-Hsing Chu Kurt Debattista Alan Chalmers Published

More information

A GPU based Saliency Map for High-Fidelity Selective Rendering

A GPU based Saliency Map for High-Fidelity Selective Rendering A GPU based Saliency Map for High-Fidelity Selective Rendering Peter Longhurst University of Bristol Kurt Debattista University of Bristol Alan Chalmers University of Bristol Abstract The computation of

More information

Spatial Standard Observer for Visual Technology

Spatial Standard Observer for Visual Technology Spatial Standard Observer for Visual Technology Andrew B. Watson NASA Ames Research Center Moffett Field, CA 94035-1000 Andrew.b.watson@nasa.gov Abstract - The Spatial Standard Observer (SSO) was developed

More information

A Perceptually Based Adaptive Sampling Algorithm

A Perceptually Based Adaptive Sampling Algorithm A Perceptually Based Adaptive Sampling Algorithm Mark R Bolin Gary W Meyer Department of Computer and Information Science University of Oregon Eugene, OR 97403 Abstract A perceptually based approach for

More information

Visual Perception in Realistic Image Synthesis

Visual Perception in Realistic Image Synthesis Volume 20 (2001), number 4 pp. 211 224 COMPUTER GRAPHICS forum Visual Perception in Realistic Image Synthesis Ann McNamara Department of Computer Science, Trinity College, Dublin 2, Ireland Abstract Realism

More information

InfoVis: a semiotic perspective

InfoVis: a semiotic perspective InfoVis: a semiotic perspective p based on Semiology of Graphics by J. Bertin Infovis is composed of Representation a mapping from raw data to a visible representation Presentation organizing this visible

More information

A Perception-Based Threshold for Bidirectional Texture Functions

A Perception-Based Threshold for Bidirectional Texture Functions A Perception-Based Threshold for Bidirectional Texture Functions Banafsheh Azari (banafsheh.azari@uni-weimar.de) CogVis/MMC, Faculty of Media, Bauhaus-Universität Weimar, Bauhausstraße 11, 99423 Weimar,

More information

Radiance on S mall S creen Devices

Radiance on S mall S creen Devices Radiance on S mall S creen Devices Matt Aranha, Alan Chalmers and Steve Hill University of Bristol and ST Microelectronics 5th International Radiance Scientific Workshop 13 th September 2006 Overview Research

More information

SUMMARY. CS380: Introduction to Computer Graphics Ray tracing Chapter 20. Min H. Kim KAIST School of Computing 18/05/29. Modeling

SUMMARY. CS380: Introduction to Computer Graphics Ray tracing Chapter 20. Min H. Kim KAIST School of Computing 18/05/29. Modeling CS380: Introduction to Computer Graphics Ray tracing Chapter 20 Min H. Kim KAIST School of Computing Modeling SUMMARY 2 1 Types of coordinate function Explicit function: Line example: Implicit function:

More information

The Effects of Semantic Grouping on Visual Search

The Effects of Semantic Grouping on Visual Search To appear as Work-in-Progress at CHI 2008, April 5-10, 2008, Florence, Italy The Effects of Semantic Grouping on Visual Search Figure 1. A semantically cohesive group from an experimental layout. Nuts

More information

An ICA based Approach for Complex Color Scene Text Binarization

An ICA based Approach for Complex Color Scene Text Binarization An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in

More information

Lightcuts. Jeff Hui. Advanced Computer Graphics Rensselaer Polytechnic Institute

Lightcuts. Jeff Hui. Advanced Computer Graphics Rensselaer Polytechnic Institute Lightcuts Jeff Hui Advanced Computer Graphics 2010 Rensselaer Polytechnic Institute Fig 1. Lightcuts version on the left and naïve ray tracer on the right. The lightcuts took 433,580,000 clock ticks and

More information

MULTI-SCALE STRUCTURAL SIMILARITY FOR IMAGE QUALITY ASSESSMENT. (Invited Paper)

MULTI-SCALE STRUCTURAL SIMILARITY FOR IMAGE QUALITY ASSESSMENT. (Invited Paper) MULTI-SCALE STRUCTURAL SIMILARITY FOR IMAGE QUALITY ASSESSMENT Zhou Wang 1, Eero P. Simoncelli 1 and Alan C. Bovik 2 (Invited Paper) 1 Center for Neural Sci. and Courant Inst. of Math. Sci., New York Univ.,

More information

Multidimensional image retargeting

Multidimensional image retargeting Multidimensional image retargeting 9:00: Introduction 9:10: Dynamic range retargeting Tone mapping Apparent contrast and brightness enhancement 10:45: Break 11:00: Color retargeting 11:30: LDR to HDR 12:20:

More information

Visualization Insider A Little Background Information

Visualization Insider A Little Background Information Visualization Insider A Little Background Information Visualization Insider 2 Creating Backgrounds for 3D Scenes Backgrounds are a critical part of just about every type of 3D scene. Although they are

More information

Statistical image models

Statistical image models Chapter 4 Statistical image models 4. Introduction 4.. Visual worlds Figure 4. shows images that belong to different visual worlds. The first world (fig. 4..a) is the world of white noise. It is the world

More information

Assignment 6: Ray Tracing

Assignment 6: Ray Tracing Assignment 6: Ray Tracing Programming Lab Due: Monday, April 20 (midnight) 1 Introduction Throughout this semester you have written code that manipulated shapes and cameras to prepare a scene for rendering.

More information

OPTIMIZED QUALITY EVALUATION APPROACH OF TONED MAPPED IMAGES BASED ON OBJECTIVE QUALITY ASSESSMENT

OPTIMIZED QUALITY EVALUATION APPROACH OF TONED MAPPED IMAGES BASED ON OBJECTIVE QUALITY ASSESSMENT OPTIMIZED QUALITY EVALUATION APPROACH OF TONED MAPPED IMAGES BASED ON OBJECTIVE QUALITY ASSESSMENT ANJIBABU POLEBOINA 1, M.A. SHAHID 2 Digital Electronics and Communication Systems (DECS) 1, Associate

More information

Detail to Attention: Exploiting Visual Tasks for Selective Rendering

Detail to Attention: Exploiting Visual Tasks for Selective Rendering Eurographics Symposium on Rendering 2003 Per Christensen and Daniel Cohen-Or (Editors) Detail to Attention: Exploiting Visual Tasks for Selective Rendering K. Cater, 1 A. Chalmers 1 and G. Ward 2 1 Department

More information

Advanced Deferred Rendering Techniques. NCCA, Thesis Portfolio Peter Smith

Advanced Deferred Rendering Techniques. NCCA, Thesis Portfolio Peter Smith Advanced Deferred Rendering Techniques NCCA, Thesis Portfolio Peter Smith August 2011 Abstract The following paper catalogues the improvements made to a Deferred Renderer created for an earlier NCCA project.

More information

Rendering Light Reflection Models

Rendering Light Reflection Models Rendering Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 3, 2017 Lecture #13 Program of Computer Graphics, Cornell University General Electric - 167 Cornell in

More information

Automatic Shadow Removal by Illuminance in HSV Color Space

Automatic Shadow Removal by Illuminance in HSV Color Space Computer Science and Information Technology 3(3): 70-75, 2015 DOI: 10.13189/csit.2015.030303 http://www.hrpub.org Automatic Shadow Removal by Illuminance in HSV Color Space Wenbo Huang 1, KyoungYeon Kim

More information

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction Jaemin Lee and Ergun Akleman Visualization Sciences Program Texas A&M University Abstract In this paper we present a practical

More information

Image Fusion For Context Enhancement and Video Surrealism

Image Fusion For Context Enhancement and Video Surrealism DIP PROJECT REPORT Image Fusion For Context Enhancement and Video Surrealism By Jay Guru Panda (200802017) Shashank Sharma(200801069) Project Idea: Paper Published (same topic) in SIGGRAPH '05 ACM SIGGRAPH

More information

Light Reflection Models

Light Reflection Models Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 21, 2014 Lecture #15 Goal of Realistic Imaging From Strobel, Photographic Materials and Processes Focal Press, 186.

More information

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary

Ambien Occlusion. Lighting: Ambient Light Sources. Lighting: Ambient Light Sources. Summary Summary Ambien Occlusion Kadi Bouatouch IRISA Email: kadi@irisa.fr 1. Lighting 2. Definition 3. Computing the ambient occlusion 4. Ambient occlusion fields 5. Dynamic ambient occlusion 1 2 Lighting: Ambient

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Ray Tracing III. Wen-Chieh (Steve) Lin National Chiao-Tung University

Ray Tracing III. Wen-Chieh (Steve) Lin National Chiao-Tung University Ray Tracing III Wen-Chieh (Steve) Lin National Chiao-Tung University Shirley, Fundamentals of Computer Graphics, Chap 10 Doug James CG slides, I-Chen Lin s CG slides Ray-tracing Review For each pixel,

More information

Visualizing Flow Fields by Perceptual Motion

Visualizing Flow Fields by Perceptual Motion Visualizing Flow Fields by Perceptual Motion Li-Yi Wei Wei-Chao Chen Abstract Visualizing flow fields has a wide variety of applications in scientific simulation and computer graphics. Existing approaches

More information

COMP 4801 Final Year Project. Ray Tracing for Computer Graphics. Final Project Report FYP Runjing Liu. Advised by. Dr. L.Y.

COMP 4801 Final Year Project. Ray Tracing for Computer Graphics. Final Project Report FYP Runjing Liu. Advised by. Dr. L.Y. COMP 4801 Final Year Project Ray Tracing for Computer Graphics Final Project Report FYP 15014 by Runjing Liu Advised by Dr. L.Y. Wei 1 Abstract The goal of this project was to use ray tracing in a rendering

More information

Automatic Colorization of Grayscale Images

Automatic Colorization of Grayscale Images Automatic Colorization of Grayscale Images Austin Sousa Rasoul Kabirzadeh Patrick Blaes Department of Electrical Engineering, Stanford University 1 Introduction ere exists a wealth of photographic images,

More information

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker

CMSC427 Advanced shading getting global illumination by local methods. Credit: slides Prof. Zwicker CMSC427 Advanced shading getting global illumination by local methods Credit: slides Prof. Zwicker Topics Shadows Environment maps Reflection mapping Irradiance environment maps Ambient occlusion Reflection

More information

Selective Tone Mapper

Selective Tone Mapper Selective Tone Mapper Author: Alessandro Artusi, Intercollege Cyprus, artusi.a@intercollege.ac.cy Author: Benjamin Roch Author: Yiorgos Chrysanthou, University of Cyprus, Yiorgos@ucy.ac.cy Author: Despina

More information

Efficient Rendering of Glossy Reflection Using Graphics Hardware

Efficient Rendering of Glossy Reflection Using Graphics Hardware Efficient Rendering of Glossy Reflection Using Graphics Hardware Yoshinori Dobashi Yuki Yamada Tsuyoshi Yamamoto Hokkaido University Kita-ku Kita 14, Nishi 9, Sapporo 060-0814, Japan Phone: +81.11.706.6530,

More information

Introduction to Visible Watermarking. IPR Course: TA Lecture 2002/12/18 NTU CSIE R105

Introduction to Visible Watermarking. IPR Course: TA Lecture 2002/12/18 NTU CSIE R105 Introduction to Visible Watermarking IPR Course: TA Lecture 2002/12/18 NTU CSIE R105 Outline Introduction State-of of-the-art Characteristics of Visible Watermarking Schemes Attacking Visible Watermarking

More information

Quality Metric for Shadow Rendering

Quality Metric for Shadow Rendering Proceedings of the Federated Conference on Computer Science DOI: 10.15439/2016F494 and Information Systems pp. 791 796 ACSIS, Vol. 8. ISSN 2300-5963 Quality Metric for Shadow Rendering Krzysztof Kluczek

More information

Holly Rushmeier a, Bernice E. Rogowitz a, Christine Piatko b

Holly Rushmeier a, Bernice E. Rogowitz a, Christine Piatko b Perceptual Issues in Substituting Texture for Geometry Holly Rushmeier a, Bernice E. Rogowitz a, Christine Piatko b a IBM TJ Watson Research Center, P.O. Box 74, Yorktown, NY, USA b Johns Hopkins Applied

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Perceptually Guided Simplification of Lit, Textured Meshes

Perceptually Guided Simplification of Lit, Textured Meshes Perceptually Guided Simplification of Lit, Textured Meshes Nathaniel Williams * David Luebke Jonathan D. Cohen Michael Kelley Brenden Schubert ABSTRACT We present a new algorithm for best-effort simplification

More information

Computer Graphics. Illumination and Shading

Computer Graphics. Illumination and Shading () Illumination and Shading Dr. Ayman Eldeib Lighting So given a 3-D triangle and a 3-D viewpoint, we can set the right pixels But what color should those pixels be? If we re attempting to create a realistic

More information

CMSC427 Shading Intro. Credit: slides from Dr. Zwicker

CMSC427 Shading Intro. Credit: slides from Dr. Zwicker CMSC427 Shading Intro Credit: slides from Dr. Zwicker 2 Today Shading Introduction Radiometry & BRDFs Local shading models Light sources Shading strategies Shading Compute interaction of light with surfaces

More information

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye Ray Tracing What was the rendering equation? Motivate & list the terms. Relate the rendering equation to forward ray tracing. Why is forward ray tracing not good for image formation? What is the difference

More information

A Framework for Global Illumination in Animated Environments

A Framework for Global Illumination in Animated Environments A Framework for Global Illumination in Animated Environments Jeffry Nimeroff 1 Julie Dorsey 2 Holly Rushmeier 3 1 University of Pennsylvania, Philadelphia PA 19104, USA 2 Massachusetts Institute of Technology,

More information

Selective Rendering: Computing only what you see

Selective Rendering: Computing only what you see Selective Rendering: Computing only what you see Alan Chalmers University of Bristol, UK Kurt Debattista University of Bristol, UK Luis Paulo dos Santos University of Minho, Portugal Abstract The computational

More information

MULTIRESOLUTION QUALITY EVALUATION OF GEOMETRICALLY DISTORTED IMAGES. Angela D Angelo, Mauro Barni

MULTIRESOLUTION QUALITY EVALUATION OF GEOMETRICALLY DISTORTED IMAGES. Angela D Angelo, Mauro Barni MULTIRESOLUTION QUALITY EVALUATION OF GEOMETRICALLY DISTORTED IMAGES Angela D Angelo, Mauro Barni Department of Information Engineering University of Siena ABSTRACT In multimedia applications there has

More information

Global Rendering. Ingela Nyström 1. Effects needed for realism. The Rendering Equation. Local vs global rendering. Light-material interaction

Global Rendering. Ingela Nyström 1. Effects needed for realism. The Rendering Equation. Local vs global rendering. Light-material interaction Effects needed for realism Global Rendering Computer Graphics 1, Fall 2005 Lecture 7 4th ed.: Ch 6.10, 12.1-12.5 Shadows Reflections (Mirrors) Transparency Interreflections Detail (Textures etc.) Complex

More information

Perceptually Guided Simplification of Lit, Textured Meshes

Perceptually Guided Simplification of Lit, Textured Meshes Perceptually Guided Simplification of Lit, Textured Meshes David Luebke 1, Jonathan Cohen 2, Nathaniel Williams 1, Mike Kelly 1, Brenden Schubert 1 1 University of Virginia, 2 Johns Hopkins University

More information

Efficient and Effective Quality Assessment of As-Is Building Information Models and 3D Laser-Scanned Data

Efficient and Effective Quality Assessment of As-Is Building Information Models and 3D Laser-Scanned Data Efficient and Effective Quality Assessment of As-Is Building Information Models and 3D Laser-Scanned Data Pingbo Tang 1, Engin Burak Anil 2, Burcu Akinci 2, Daniel Huber 3 1 Civil and Construction Engineering

More information

Introduction. Psychometric assessment. DSC Paris - September 2002

Introduction. Psychometric assessment. DSC Paris - September 2002 Introduction The driving task mainly relies upon visual information. Flight- and driving- simulators are designed to reproduce the perception of reality rather than the physics of the scene. So when it

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

WATERMARKING FOR LIGHT FIELD RENDERING 1

WATERMARKING FOR LIGHT FIELD RENDERING 1 ATERMARKING FOR LIGHT FIELD RENDERING 1 Alper Koz, Cevahir Çığla and A. Aydın Alatan Department of Electrical and Electronics Engineering, METU Balgat, 06531, Ankara, TURKEY. e-mail: koz@metu.edu.tr, cevahir@eee.metu.edu.tr,

More information

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models Computergrafik Matthias Zwicker Universität Bern Herbst 2009 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation of physics Global

More information

Image resizing and image quality

Image resizing and image quality Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 2001 Image resizing and image quality Michael Godlewski Follow this and additional works at: http://scholarworks.rit.edu/theses

More information

COMP environment mapping Mar. 12, r = 2n(n v) v

COMP environment mapping Mar. 12, r = 2n(n v) v Rendering mirror surfaces The next texture mapping method assumes we have a mirror surface, or at least a reflectance function that contains a mirror component. Examples might be a car window or hood,

More information

Skin Color Transfer. Introduction. Other-Race Effect. This work

Skin Color Transfer. Introduction. Other-Race Effect. This work Skin Color Transfer This work Introduction Investigates the importance of facial skin color for race differentiation Explore the potential information for other-race race effect Basic Idea Match the average

More information

7 Fractions. Number Sense and Numeration Measurement Geometry and Spatial Sense Patterning and Algebra Data Management and Probability

7 Fractions. Number Sense and Numeration Measurement Geometry and Spatial Sense Patterning and Algebra Data Management and Probability 7 Fractions GRADE 7 FRACTIONS continue to develop proficiency by using fractions in mental strategies and in selecting and justifying use; develop proficiency in adding and subtracting simple fractions;

More information

A New Time-Dependent Tone Mapping Model

A New Time-Dependent Tone Mapping Model A New Time-Dependent Tone Mapping Model Alessandro Artusi Christian Faisstnauer Alexander Wilkie Institute of Computer Graphics and Algorithms Vienna University of Technology Abstract In this article we

More information

Object Purpose Based Grasping

Object Purpose Based Grasping Object Purpose Based Grasping Song Cao, Jijie Zhao Abstract Objects often have multiple purposes, and the way humans grasp a certain object may vary based on the different intended purposes. To enable

More information

Intra-Mode Indexed Nonuniform Quantization Parameter Matrices in AVC/H.264

Intra-Mode Indexed Nonuniform Quantization Parameter Matrices in AVC/H.264 Intra-Mode Indexed Nonuniform Quantization Parameter Matrices in AVC/H.264 Jing Hu and Jerry D. Gibson Department of Electrical and Computer Engineering University of California, Santa Barbara, California

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Photometric Stereo with Auto-Radiometric Calibration

Photometric Stereo with Auto-Radiometric Calibration Photometric Stereo with Auto-Radiometric Calibration Wiennat Mongkulmann Takahiro Okabe Yoichi Sato Institute of Industrial Science, The University of Tokyo {wiennat,takahiro,ysato} @iis.u-tokyo.ac.jp

More information

Computer Graphics. Lecture 14 Bump-mapping, Global Illumination (1)

Computer Graphics. Lecture 14 Bump-mapping, Global Illumination (1) Computer Graphics Lecture 14 Bump-mapping, Global Illumination (1) Today - Bump mapping - Displacement mapping - Global Illumination Radiosity Bump Mapping - A method to increase the realism of 3D objects

More information

Real-Time Image Based Lighting in Software Using HDR Panoramas

Real-Time Image Based Lighting in Software Using HDR Panoramas Real-Time Image Based Lighting in Software Using HDR Panoramas Jonas Unger, Magnus Wrenninge, Filip Wänström and Mark Ollila Norrköping Visualization and Interaction Studio Linköping University, Sweden

More information

Photo Studio Optimizer

Photo Studio Optimizer CATIA V5 Training Foils Photo Studio Optimizer Version 5 Release 19 September 008 EDU_CAT_EN_PSO_FF_V5R19 Photo Studio Optimizer Objectives of the course Upon completion of this course you will be able

More information

Object perception by primates

Object perception by primates Object perception by primates How does the visual system put together the fragments to form meaningful objects? The Gestalt approach The whole differs from the sum of its parts and is a result of perceptual

More information

Color Universal Design without Restricting Colors and Their Combinations Using Lightness Contrast Dithering

Color Universal Design without Restricting Colors and Their Combinations Using Lightness Contrast Dithering Color Universal Design without Restricting Colors and Their Combinations Using Lightness Contrast Dithering Yuki Omori*, Tamotsu Murakami**, Takumi Ikeda*** * The University of Tokyo, omori812@mail.design.t.u-tokyo.ac.jp

More information