Simulating the Effect of Illumination Using Color Transformations

Simulating the Effect of Illumination Using Color Transformations

Maya R. Gupta (a), Steve Upton (b), and Jayson Bowen (a)
(a) Dept. of Electrical Engineering, University of Washington, Seattle, WA
(b) Chromix, Seattle, WA

ABSTRACT

We investigate design and estimation issues in using the standard color management profile architecture for general custom image enhancement. Color management profiles are a flexible architecture for describing a mapping from an original colorspace to a new colorspace. We investigate using this same architecture to describe color enhancements that a non-technical user could define from samples of the mapping, just as color management is based on samples of a mapping between an original colorspace and a new colorspace. As an example enhancement, we work with photos of the 24-patch Macbeth color chart under different illuminations, with the goal of defining transformations that would take, for example, a studio D65 image and reproduce it as though it had been taken during a particular sunset. The color management profile architecture includes a look-up table and interpolation. We concentrate on estimating the look-up-table points from a minimal number of color enhancement samples (comparing interpolative and extrapolative statistical learning techniques), and evaluate the feasibility of using the color management architecture for custom enhancement definitions.

Keywords: ICC profile, custom color enhancement, LIME, color mapping, 3D LUT

1. INTRODUCTION

Digital design and graphic arts is an expanding field, due in part to cheap and easy access to high-quality color printing technologies and the growth and commercialization of the Internet. Design plays an indispensable role in perceptions of quality and branding [1, 2]. How do we empower digital designers to create custom color enhancements?
Recently, there has been interest in computer-aided color image enhancement that allows users to create complicated or custom effects with limited work or technical knowledge. This paper explores the statistical learning issues that arise in learning a custom color enhancement from a small number of sample input/output color pairs. In particular, we use the ICC profile architecture to define and implement a custom transformation. As an example application, this paper considers the problem of simulating illumination effects by a color transformation.

ICC profiles [3] were designed for color management of devices, such as printers, which require empirical characterization and complex color transformations to model their sampled characteristics. The core of an ICC profile is a three-dimensional look-up table (LUT) that defines how input colors are transformed into output colors. There are two major advantages to implementing a custom color enhancement as an ICC profile. First, a three-dimensional look-up table is a very flexible way to define a color transformation. Second, once defined, the color transformation can be implemented by any color management module. This allows users to implement transforms using non-proprietary software and to view and edit transforms using color management applications such as ColorThink [4]. ICC profiles are standardized by the International Color Consortium to provide an open, vendor-neutral device characterization for color management.

Simulating illumination changes is an example of a complex color transformation that can be defined by sample pairs. For example, a designer might wish that a studio photo of a hamburger or model had been taken under the light of a certain Namibian sunset. If the designer has captured the effect of that Namibian sunset and the effect of the studio light, then the effect of the sunset can be simulated on the studio photo. In this paper,

Further author information: (Send correspondence to M.R.G.)
M.R.G.: gupta@ee.washington.edu, S. U.: upton@chromix.com

we capture the effect of an illuminant on the 24 color samples of a Gretag Macbeth ColorChecker chart, and then learn a color transformation for the entire colorspace based on input/output pairs of, for example, D65 illumination to a particular sunset light.

2. RELATED WORK

Aiding designers in producing complex custom color enhancements is a popular area of research, and we reference only a few representative examples. One method to define a custom enhancement is to transform the color palette of an input image based on the color palette of a single reference image [5, 6]. In such work, the output image spatially looks like the input image, but has inherited, or moved towards, the color palette of the reference image. Hertzmann et al. [7] use a pair of reference images to learn a transformation that can then be applied to a third image. They use multi-scale spatial features as well as pixel color values to create image transformations such as adding texture. LUTs have previously been used to speed up known functions that transform colors or images [8]. In Gatta et al.'s opinion [9], "the LUT transfer function is the simplest way to apply a global filtering to an image using a function previously devised." In contrast to prior work, this paper confines itself to colorspace transformations that can be implemented with a three-dimensional LUT and that are not already implementable by a known function or algorithm, so that the data to learn from is a relatively small sample of input/output color pairs (< 100), as might be defined by a graphic artist. This paper explores the use of statistical learning to create an accurate and visually smooth three-dimensional LUT transformation. We use simulating illumination changes as an example application and data source.
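The 3D-LUT machinery is straightforward to sketch in code. The fragment below is our own illustration, not code from the paper; the single cube-shaped grid range and the 21-point resolution are assumptions. It applies a 3D LUT to one color using trilinear interpolation over the eight vertices of the enclosing grid cell:

```python
import numpy as np

def apply_3d_lut(lut, color, lo=-100.0, hi=100.0):
    """Apply a 3D LUT (shape [n, n, n, 3]) to one color via trilinear
    interpolation. `lo`/`hi` bound the assumed uniform grid range."""
    n = lut.shape[0]
    # Map the color into continuous grid coordinates.
    t = (np.asarray(color, float) - lo) / (hi - lo) * (n - 1)
    t = np.clip(t, 0, n - 1 - 1e-9)
    i0 = t.astype(int)          # lower corner of the enclosing grid cell
    f = t - i0                  # fractional position inside the cell
    out = np.zeros(3)
    # Blend the eight vertices of the enclosing grid cell.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * lut[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```

An identity LUT (each grid point maps to its own coordinates) leaves any in-range color unchanged, which makes a convenient sanity check.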
Another way to simulate illumination is to estimate the reference and destination illumination spectra, then divide out the reference spectrum from the three color channels of the image and multiply in the destination spectrum. Direct illumination spectra estimation has been studied in work such as the scanner calibration literature. Since we are interested in an architecture for general color enhancement based on input and output samples, illumination spectra estimation is not general enough for our goals.

Whether simulating the effect of an illuminant by estimating and applying the spectra, or, as we do in this paper, by learning a color transformation based on samples, there remains the problem of metamers. Objects that appear the same in the original photograph may have physical properties that would make them look different under another illumination. For example, water and jeans might be metameric objects, both appearing the same color of blue in the image to be transformed. Without semantic knowledge of what the object is, all pixels of a given color value will be transformed in the same way by the proposed color transformations. Work on semantic object recognition [10] may one day be coupled with object-specific transforms, but in the present work we ignore metameric complications. This work is also related to white balance for digital cameras; white balance algorithms are generally simple and serve a different purpose than the enhancements investigated here.

3. ILLUMINANT DATA

Photographs of a standard 2004 Gretag Macbeth ColorChecker chart under different illumination conditions were used as the data for this project. A Sony DSC-F828 8-megapixel camera was used. For each picture, the color chart and a standard photographer's gray card were set up on a board. The camera was set to manual mode with the white balance set to daylight.
The metering mode was set to spot, and the angle of the board was adjusted until all the readings on the gray card were equal. Then the photo was taken. Example photos are shown in Fig. 1. A D65 photo was taken of the chart under a Gretag Macbeth SolSource D65-filtered lamp. For each of the twenty-four color squares in each image, the corresponding sRGB pixel values from the camera were averaged, forming a data set of twenty-four sRGB values for each illumination condition. The sRGB values were then converted to CIELAB values (D65 white point) by the standard formulas. Differences in CIELAB between pairs of illuminant samples are shown in Fig. 2.
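The sRGB-to-CIELAB conversion applied to the patch averages follows the standard formulas; a minimal Python sketch (our illustration; 8-bit sRGB input and the D65 white point are assumed):

```python
import numpy as np

# sRGB -> linear RGB -> CIE XYZ -> CIELAB, D65 white point.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE_D65 = M @ np.ones(3)          # XYZ of the sRGB white point

def srgb_to_lab(rgb255):
    c = np.asarray(rgb255, float) / 255.0
    # Undo the sRGB gamma encoding.
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin
    # CIELAB cube-root nonlinearity with the linear toe near black.
    t = xyz / WHITE_D65
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])
```

By construction, sRGB white (255, 255, 255) maps to L = 100, a = b = 0, and black maps to the origin.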

Figure 1. Left: chart photographed under cloudy conditions. Right: chart photographed under a soft white incandescent light.

Figure 2. Left: vectors show the differences between the 24 D65 color chart samples and the 24 soft white incandescent color chart samples in three-dimensional CIELAB space. Right: vectors show the differences between the 24 cloudy color chart samples and the 24 soft white incandescent color chart samples in three-dimensional CIELAB space.

4. ESTIMATING THE 3D LUT

Based on the known input/output CIELAB color sample pairs, a 3D LUT is estimated which maps a 21×21×21 grid in CIELAB to corresponding output CIELAB color values. Estimating the output values of the 21×21×21 grid from only 24 sample pairs is a difficult learning problem, particularly because many of the grid points fall outside the convex hull of the set of 24 samples in the input space. The estimation issues are interpolation vs. extrapolation, a fit flexible enough to capture the underlying transform vs. overfitting it, and the smoothness and monotonicity of the estimated transform. Because this is a color enhancement, smoothness of the transform around the neutrals is particularly important. In this paper we consider two radically different methods for estimating the 3D LUT grid points: local linear regression, and linear interpolation with maximum entropy.

Regression methods can be categorized as interpolative or extrapolative. For example, local linear regression [11] on a k-neighbor neighborhood fits a plane to the k samples nearest the grid point in the input space, out of the 24 original input samples. Since a plane is fit to the samples closest to each grid point, the method extrapolates to grid points that are outside the convex hull of the set of 24 samples in the input space. Local linear regression has been shown to have nice properties near boundaries and for unevenly distributed sample points [12], as is the case for this application. Consider, on the other hand, the common tetrahedral interpolation sometimes used in color management [13], which linearly interpolates the four samples closest to a particular grid point in the input space in order to estimate that grid point's output value. If a grid point is outside the convex hull formed by its four nearest samples, then the tetrahedral interpolation weights cannot be solved for, since the weights must sum to one.

Let x be a point on the grid that forms the 3D LUT.
Let ŷ be the estimate of its transformed color. Let x_1, x_2, ..., x_{24} be the 24 sample points in the input colorspace, indexed by their Euclidean distance to x in the input CIELAB space, and let y_1, y_2, ..., y_{24} be the corresponding output, or transformed, CIELAB color values. Then tetrahedral interpolation calculates weights w_j that solve

\sum_{j=1}^{4} w_j x_j = x \quad \text{subject to} \quad \sum_{j=1}^{4} w_j = 1, \; w_j \ge 0 \text{ for all } j. \tag{1}

Once the tetrahedral weights w_1, w_2, w_3, w_4 have been found (if they exist), the tetrahedral interpolation estimate is

\hat{y} = \sum_{j=1}^{4} w_j y_j.

As noted above, (1) cannot be solved for all grid points: tetrahedral interpolation can only be used if the grid point x being estimated is in the closure of the convex hull formed by its four nearest neighbors. Linear interpolation with maximum entropy (LIME) [14] generalizes linear interpolation so that any number of sample points can be used, and there is no problem if the test point x lies outside the convex hull of its neighbors. The LIME weights attempt to solve the linear interpolation equations. If x is not in the convex hull of its neighbors, then the LIME weights minimize the distance between the reconstructed point \sum_j w_j x_j and x, which in effect projects x onto the convex hull of the neighbors. If the linear interpolation equations are underdetermined (as may happen when k > 4 in three dimensions), then there exist multiple solutions to the linear interpolation equations, and the LIME weights are the solution that maximizes entropy. Formally, the LIME weights w_j solve

\arg\min_{w} \left( \Big\| \sum_{j=1}^{k} w_j x_j - x \Big\|^2 + \lambda \sum_{j=1}^{k} w_j \log w_j \right) \quad \text{subject to} \quad \sum_{j=1}^{k} w_j = 1, \; w_j \ge 0 \text{ for all } j,

where λ and k are algorithmic parameters. In this work we set λ = 10^{-9}, so that the weights focus on minimizing the distortion between \sum_j w_j x_j and x. For k = 4, the LIME weights are numerically equivalent to the tetrahedral weights, if the tetrahedral weights exist. Given the LIME weights w_j, the color value estimated to be the output corresponding to input grid color x is

\hat{y} = \sum_{j=1}^{k} w_j y_j. \tag{2}

5. RESULTS

For different pairs of input/output illumination conditions, we compare the local linear regression and LIME estimates of the 3D LUT by comparing how new points are transformed when the estimated 3D LUT transformation is applied. To estimate a new color value z, the eight vertices of its grid cell are determined, and then z's corresponding output color is calculated using trilinear interpolation [13].

First, we consider a test case of neutral colors. After estimating the 3D LUT for transforming D65 illumination to a sunset illumination, we transform the neutral axis in CIELAB using the estimated 3D mappings. The results are shown in Fig. 3. It is clear from the plots that the estimation algorithm and the number of neighbors used significantly change the estimation, even in the center of the colorspace. As expected, increasing the number of neighbors leads to a smoother fit, but may smooth out the desired color mapping. Using four near-neighbors with local linear regression leads to a non-monotonic luminance transformation of the neutral axis, and large hue variance in the mid-grays. Overall, the LIME estimate is less variable with respect to the number of neighbors used. The LIME change in hue and luminance is also smoother over the neutral axis for a given number of neighbors. However, some of that smoothness is due to its interpolative nature, which limits the darkest dark and brightest white to the convex hull of the original twenty-four samples.

What happens at the edge of the D65-to-sunset mapping? In Fig.
4, the output CIELAB color values corresponding to the regular input CIELAB grid are shown for the grid estimated by LIME and the grid estimated by local linear regression, each using six nearest neighbors. Here, the difference between the interpolative and extrapolative methods is clear. Since it does only interpolation, the LIME grid maps all input colors into the small gamut defined by the convex hull of the original twenty-four samples. This can lead to images where very saturated object regions are clipped and appear flat. The local linear regression grid maps input colors to the entire space; in fact, the colors shown have been clipped so that L ∈ [0, 100] and a, b ∈ [−100, 100]. Many of the colors that local linear regression maps to will be outside the gamut of standard monitors or printers, and thus many points will again be clipped upon display, leading to some flatness in extreme color regions of an image. Because the local linear regression grid extrapolates to the entire space, the average distance between output colors is much greater than the average distance between output colors from the LIME grid. This can lead to false edges in images mapped with local linear regression.

In Fig. 5, a daylight photograph of a building (from the Foveon Image Gallery) is transformed using a D65-to-sunset color transformation. The left transformations use local linear regression, and the right transformations use LIME estimation. In the context of the image, the transformations look quite similar, with the most noticeable differences occurring in the brightest areas. In Fig. 6, a daylight photograph of a landscape is transformed from daylight (D65) to sunset using four different transformations. In both Fig. 5 and Fig. 6, the brightest regions of the local linear regression may strike some viewers as too bright compared to the rest of the image's palette. In Fig.
7, a clearer difference between the estimation methods is seen in this application of a soft-white-incandescent-to-cloudy color transformation. The specular highlights on the red pepper have turned blue in the LIME images, and this will strike most viewers as unnatural. Though the off-color specular highlights are a very tiny portion of the image, they give the sense that the entire image is too blue, due to humans' extreme sensitivity to highlights. Less offensive, but also unnatural, is the flat bright white region of the foreground garlic in the local linear regression images, which is inconsistent with the rest of the palette. Though not too

Figure 3. Plots show how the neutral CIELAB axis is transformed under the estimated color mapping for D65 illumination to a sunset illumination. Left: local linear regression using 4, 6, and 9 neighbors (from top to bottom). Right: LIME using 4, 6, and 9 neighbors (from top to bottom).

Figure 4. Both plots show the 3D grid mapping from D65 to a sunset illumination. Left: local linear regression output values corresponding to the 3D LUT grid points, using a neighborhood of six of the twenty-four original input samples. Right: LIME output values corresponding to the 3D LUT grid points, using a neighborhood of six of the twenty-four original input samples.

disagreeable, the extrapolated local linear regression highlights and bright areas again seem too bright for the color cast of the rest of the image. The differences between the six- and nine-neighbor images are not very noticeable in the images shown.

6. DISCUSSION

In this paper we have investigated issues arising from estimating color transformations using a 3D LUT based on relatively few samples. We have shown that either local linear regression or LIME estimation can result in sensible transformations for realistic image conversions. An important estimation issue for this application is the interpolative vs. extrapolative nature of the estimation. Interpolation is severely limited by its very nature; for neutral images the interpolative fit may better reflect the desired transformation, but for images with extreme color values any interpolative method will lead to clipping, and objects can acquire an appearance of flatness due to lost contrast. The clipping due to interpolation can also cause miscolored specular reflections, to which the eye is extremely sensitive.

Other regression or smoothing algorithms could be used to estimate the 3D LUT. Due to the sparsity of the data samples, polynomial or other spline approaches may have a negative extrapolative effect, resulting in high variance of the estimated color transformation for small changes in algorithm parameters. Further, extrapolative estimation risks assigning extreme output color values and failing to consistently capture the sense of the desired transformation.
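As an illustration of the extrapolative estimator discussed in this paper, k-nearest-neighbor local linear regression can be sketched as follows (our own sketch; the variable names and the use of a least-squares solver are assumptions):

```python
import numpy as np

def local_linear_regression(x, X, Y, k=6):
    """Estimate the output color at grid point x (3,) from sample pairs
    X (n, 3) -> Y (n, 3) by fitting an affine map to the k nearest input
    samples; the fit extrapolates outside their convex hull."""
    x = np.asarray(x, float)
    d = np.linalg.norm(X - x, axis=1)
    idx = np.argsort(d)[:k]                    # k nearest input samples
    A = np.hstack([X[idx], np.ones((k, 1))])   # affine design matrix
    # Least-squares plane fit, one column of coefficients per output channel.
    coef, *_ = np.linalg.lstsq(A, Y[idx], rcond=None)
    return np.append(x, 1.0) @ coef
```

When the underlying transformation is exactly affine and the neighbors are in general position, the fit reproduces it exactly, even at grid points far outside the samples' convex hull.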
The key issue for this application is finding a middle way between interpolating and extrapolating that retains the sense of the transformation and optimally fills the output colorspace, resulting in minimal clipping. Further, since the goal is an automated system that does not require the user to carefully analyze different estimated transformations, the estimation method should lead to a sensible transformation for a large class of possible original sample pairs. This goal is more likely to be achieved with estimation methods that are robust, in the sense that small changes in the estimation parameters (such as changing the neighborhood size) lead to small changes in the estimated transformation. Such robustness will help ensure consistently reasonable estimation behavior over a large class of input sample pairs.
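The LIME weights of Section 4 can be approximated numerically. The sketch below uses exponentiated-gradient (mirror-descent) updates, which keep the weights nonnegative and summing to one by construction; the solver choice, step size, and iteration count are our assumptions, not the method used in the paper:

```python
import numpy as np

def lime_weights(x, neighbors, lam=1e-9, steps=5000, eta=0.1):
    """Approximate LIME weights for test point x (3,) and its neighbors
    (k, 3): minimize ||sum_j w_j x_j - x||^2 + lam * sum_j w_j log w_j
    over the probability simplex via exponentiated-gradient updates."""
    X = np.asarray(neighbors, float).T      # 3 x k, columns are neighbors
    k = X.shape[1]
    w = np.full(k, 1.0 / k)                 # start at maximum entropy
    for _ in range(steps):
        # Gradient of the squared reconstruction error plus entropy term.
        g = 2.0 * X.T @ (X @ w - x) + lam * (np.log(w + 1e-300) + 1.0)
        w = w * np.exp(-eta * g)            # multiplicative update
        w /= w.sum()                        # renormalize onto the simplex
    return w
```

For a test point inside the convex hull of its neighbors, the returned weights reconstruct the point; outside the hull, the reconstruction is (approximately) the projection onto the hull, matching the behavior described above.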

Figure 5. Top left: original daylight (D65) image. Left: original image transformed using the D65-to-sunset mapping estimated with local linear regression (middle: six neighbors; bottom: nine neighbors). Right: original image transformed using the D65-to-sunset mapping estimated with LIME (middle: six neighbors; bottom: nine neighbors).

Figure 6. Top left: original daylight (D65) image. Left: original image transformed using the D65-to-sunset mapping estimated with local linear regression (middle: six neighbors; bottom: nine neighbors). Right: original image transformed using the D65-to-sunset mapping estimated with LIME (middle: six neighbors; bottom: nine neighbors).

Figure 7. Top left: original image taken under unknown lighting. Left: original image transformed using the soft-white-incandescent-to-cloudy mapping estimated with local linear regression (middle: six neighbors; bottom: nine neighbors). Right: original image transformed using the soft-white-incandescent-to-cloudy mapping estimated with LIME (middle: six neighbors; bottom: nine neighbors).

Defining color image enhancements via 3D LUTs based on a small set of sample pairs is an interesting and easy way to create custom image enhancements. The statistical estimation used must be robust for this to be useful in practice.

REFERENCES

1. B. H. Schmitt and A. Simonson, Marketing Aesthetics: The Strategic Management of Brands, Identity and Image, Free Press.
2. V. Postrel, The Substance of Style: How the Rise of Aesthetic Value Is Remaking Commerce, Culture, and Consciousness, Harper-Collins.
3. ICC webpage.
4. ColorThink 2.1.2.
5. Y. Chang and S. Saito, "Example-based color stylization based on categorical perception," Proc. of the First Symposium on Applied Perception in Graphics and Visualization.
6. E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Computer Graphics and Applications 21, September.
7. A. Hertzmann, C. E. Jacobs, B. Curless, N. Oliver, and D. H. Salesin, "Image analogies," SIGGRAPH.
8. D. Marini, A. Rizzi, and L. D. Carli, "Multilevel Brownian Retinex colour correction," Machine Graphics and Vision.
9. D. Marini, A. Rizzi, C. Gatta, and S. Vacchi, "Proposal for a new method to speed up local color correction algorithms," Proc. of the SPIE.
10. Y. Li, J. A. Bilmes, and L. G. Shapiro, "Object class recognition using images of abstract regions," Proc. of the 17th Intl. Conf. on Pattern Recognition.
11. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer-Verlag, New York.
12. T. Hastie and C. Loader, "Local regression: automatic kernel carpentry," Statistical Science 8(2).
13. H. Kang, Color Technology for Electronic Imaging Devices, SPIE Press.
14. M. R. Gupta, "Inverting color transformations," Proceedings of the SPIE Conf. on Computational Imaging, 2004.


CS4442/9542b Artificial Intelligence II prof. Olga Veksler CS4442/9542b Artificial Intelligence II prof. Olga Veksler Lecture 8 Computer Vision Introduction, Filtering Some slides from: D. Jacobs, D. Lowe, S. Seitz, A.Efros, X. Li, R. Fergus, J. Hayes, S. Lazebnik,

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Dynamic Range and Weber s Law HVS is capable of operating over an enormous dynamic range, However, sensitivity is far from uniform over this range Example:

More information

Broad field that includes low-level operations as well as complex high-level algorithms

Broad field that includes low-level operations as well as complex high-level algorithms Image processing About Broad field that includes low-level operations as well as complex high-level algorithms Low-level image processing Computer vision Computational photography Several procedures and

More information

Soft shadows. Steve Marschner Cornell University CS 569 Spring 2008, 21 February

Soft shadows. Steve Marschner Cornell University CS 569 Spring 2008, 21 February Soft shadows Steve Marschner Cornell University CS 569 Spring 2008, 21 February Soft shadows are what we normally see in the real world. If you are near a bare halogen bulb, a stage spotlight, or other

More information

EECS 556 Image Processing W 09. Image enhancement. Smoothing and noise removal Sharpening filters

EECS 556 Image Processing W 09. Image enhancement. Smoothing and noise removal Sharpening filters EECS 556 Image Processing W 09 Image enhancement Smoothing and noise removal Sharpening filters What is image processing? Image processing is the application of 2D signal processing methods to images Image

More information

critical theory Computer Science

critical theory Computer Science Art/Science Shading, Materials, Collaboration Textures Example title Artists In the recommend real world, two the main following: factors determine the appearance of a surface: basic understanding what

More information

High Information Rate and Efficient Color Barcode Decoding

High Information Rate and Efficient Color Barcode Decoding High Information Rate and Efficient Color Barcode Decoding Homayoun Bagherinia and Roberto Manduchi University of California, Santa Cruz, Santa Cruz, CA 95064, USA {hbagheri,manduchi}@soe.ucsc.edu http://www.ucsc.edu

More information

Efficient Regression for Computational Imaging: from Color Management to Omnidirectional Superresolution

Efficient Regression for Computational Imaging: from Color Management to Omnidirectional Superresolution Efficient Regression for Computational Imaging: from Color Management to Omnidirectional Superresolution Maya R. Gupta Eric Garcia Raman Arora Regression 2 Regression Regression Linear Regression: fast,

More information

Lecture 15: Shading-I. CITS3003 Graphics & Animation

Lecture 15: Shading-I. CITS3003 Graphics & Animation Lecture 15: Shading-I CITS3003 Graphics & Animation E. Angel and D. Shreiner: Interactive Computer Graphics 6E Addison-Wesley 2012 Objectives Learn that with appropriate shading so objects appear as threedimensional

More information

Nonparametric Regression

Nonparametric Regression Nonparametric Regression John Fox Department of Sociology McMaster University 1280 Main Street West Hamilton, Ontario Canada L8S 4M4 jfox@mcmaster.ca February 2004 Abstract Nonparametric regression analysis

More information

this is processed giving us: perceived color that we actually experience and base judgments upon.

this is processed giving us: perceived color that we actually experience and base judgments upon. color we have been using r, g, b.. why what is a color? can we get all colors this way? how does wavelength fit in here, what part is physics, what part is physiology can i use r, g, b for simulation of

More information

CS770/870 Spring 2017 Color and Shading

CS770/870 Spring 2017 Color and Shading Preview CS770/870 Spring 2017 Color and Shading Related material Cunningham: Ch 5 Hill and Kelley: Ch. 8 Angel 5e: 6.1-6.8 Angel 6e: 5.1-5.5 Making the scene more realistic Color models representing the

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

Color Correction between Gray World and White Patch

Color Correction between Gray World and White Patch Color Correction between Gray World and White Patch Alessandro Rizzi, Carlo Gatta, Daniele Marini a Dept. of Information Technology - University of Milano Via Bramante, 65-26013 Crema (CR) - Italy - E-mail:

More information

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics

Volume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics Volume Rendering Computer Animation and Visualisation Lecture 9 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Volume Data Usually, a data uniformly distributed

More information

Computer Vision. The image formation process

Computer Vision. The image formation process Computer Vision The image formation process Filippo Bergamasco (filippo.bergamasco@unive.it) http://www.dais.unive.it/~bergamasco DAIS, Ca Foscari University of Venice Academic year 2016/2017 The image

More information

Nearest Neighbor Predictors

Nearest Neighbor Predictors Nearest Neighbor Predictors September 2, 2018 Perhaps the simplest machine learning prediction method, from a conceptual point of view, and perhaps also the most unusual, is the nearest-neighbor method,

More information

Blacksburg, VA July 24 th 30 th, 2010 Georeferencing images and scanned maps Page 1. Georeference

Blacksburg, VA July 24 th 30 th, 2010 Georeferencing images and scanned maps Page 1. Georeference George McLeod Prepared by: With support from: NSF DUE-0903270 in partnership with: Geospatial Technician Education Through Virginia s Community Colleges (GTEVCC) Georeference The process of defining how

More information

MET71 COMPUTER AIDED DESIGN

MET71 COMPUTER AIDED DESIGN UNIT - II BRESENHAM S ALGORITHM BRESENHAM S LINE ALGORITHM Bresenham s algorithm enables the selection of optimum raster locations to represent a straight line. In this algorithm either pixels along X

More information

Historical Handwritten Document Image Segmentation Using Background Light Intensity Normalization

Historical Handwritten Document Image Segmentation Using Background Light Intensity Normalization Historical Handwritten Document Image Segmentation Using Background Light Intensity Normalization Zhixin Shi and Venu Govindaraju Center of Excellence for Document Analysis and Recognition (CEDAR), State

More information

An introduction to 3D image reconstruction and understanding concepts and ideas

An introduction to 3D image reconstruction and understanding concepts and ideas Introduction to 3D image reconstruction An introduction to 3D image reconstruction and understanding concepts and ideas Samuele Carli Martin Hellmich 5 febbraio 2013 1 icsc2013 Carli S. Hellmich M. (CERN)

More information

Computer Graphics. Bing-Yu Chen National Taiwan University The University of Tokyo

Computer Graphics. Bing-Yu Chen National Taiwan University The University of Tokyo Computer Graphics Bing-Yu Chen National Taiwan University The University of Tokyo Introduction The Graphics Process Color Models Triangle Meshes The Rendering Pipeline 1 What is Computer Graphics? modeling

More information

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking Feature descriptors Alain Pagani Prof. Didier Stricker Computer Vision: Object and People Tracking 1 Overview Previous lectures: Feature extraction Today: Gradiant/edge Points (Kanade-Tomasi + Harris)

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Introduction to 3D Concepts

Introduction to 3D Concepts PART I Introduction to 3D Concepts Chapter 1 Scene... 3 Chapter 2 Rendering: OpenGL (OGL) and Adobe Ray Tracer (ART)...19 1 CHAPTER 1 Scene s0010 1.1. The 3D Scene p0010 A typical 3D scene has several

More information

Scanner Parameter Estimation Using Bilevel Scans of Star Charts

Scanner Parameter Estimation Using Bilevel Scans of Star Charts ICDAR, Seattle WA September Scanner Parameter Estimation Using Bilevel Scans of Star Charts Elisa H. Barney Smith Electrical and Computer Engineering Department Boise State University, Boise, Idaho 8375

More information

Nonlinear Multiresolution Image Blending

Nonlinear Multiresolution Image Blending Nonlinear Multiresolution Image Blending Mark Grundland, Rahul Vohra, Gareth P. Williams and Neil A. Dodgson Computer Laboratory, University of Cambridge, United Kingdom October, 26 Abstract. We study

More information

TSP Art. Craig S. Kaplan School of Computer Science University of Waterloo

TSP Art. Craig S. Kaplan School of Computer Science University of Waterloo TSP Art Craig S. Kaplan School of Computer Science University of Waterloo csk@cgl.uwaterloo.ca Robert Bosch Department of Mathematics Oberlin College bobb@cs.oberlin.edu Abstract Bosch and Herman recently

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Supervised Sementation: Pixel Classification

Supervised Sementation: Pixel Classification Supervised Sementation: Pixel Classification Example: A Classification Problem Categorize images of fish say, Atlantic salmon vs. Pacific salmon Use features such as length, width, lightness, fin shape

More information

Fog and Cloud Effects. Karl Smeltzer Alice Cao John Comstock

Fog and Cloud Effects. Karl Smeltzer Alice Cao John Comstock Fog and Cloud Effects Karl Smeltzer Alice Cao John Comstock Goal Explore methods of rendering scenes containing fog or cloud-like effects through a variety of different techniques Atmospheric effects make

More information

ICC color management for print production

ICC color management for print production ICC color management for print production TAGA Annual Technical Conference 2002 W Craig Revie Principal Consultant Fuji Film Electronic Imaging Limited ICC Chair of the Graphic Arts Special Interest Group

More information

Babu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7)

Babu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7) 5 Years Integrated M.Sc.(IT)(Semester - 7) 060010707 Digital Image Processing UNIT 1 Introduction to Image Processing Q: 1 Answer in short. 1. What is digital image? 1. Define pixel or picture element?

More information

Blue Sky Detection for Picture Quality Enhancement

Blue Sky Detection for Picture Quality Enhancement Blue Sky Detection for Picture Quality Enhancement Bahman Zafarifar 2,3 and Peter H. N. de With 1,2 1 Eindhoven University of Technology, PO Box 513, 5600 MB, The Netherlands, {B.Zafarifar, P.H.N.de.With}@tue.nl

More information

Extraction of Color and Texture Features of an Image

Extraction of Color and Texture Features of an Image International Journal of Engineering Research ISSN: 2348-4039 & Management Technology July-2015 Volume 2, Issue-4 Email: editor@ijermt.org www.ijermt.org Extraction of Color and Texture Features of an

More information

What have we leaned so far?

What have we leaned so far? What have we leaned so far? Camera structure Eye structure Project 1: High Dynamic Range Imaging What have we learned so far? Image Filtering Image Warping Camera Projection Model Project 2: Panoramic

More information

A Review on Plant Disease Detection using Image Processing

A Review on Plant Disease Detection using Image Processing A Review on Plant Disease Detection using Image Processing Tejashri jadhav 1, Neha Chavan 2, Shital jadhav 3, Vishakha Dubhele 4 1,2,3,4BE Student, Dept. of Electronic & Telecommunication Engineering,

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

3D graphics, raster and colors CS312 Fall 2010

3D graphics, raster and colors CS312 Fall 2010 Computer Graphics 3D graphics, raster and colors CS312 Fall 2010 Shift in CG Application Markets 1989-2000 2000 1989 3D Graphics Object description 3D graphics model Visualization 2D projection that simulates

More information

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye

Consider a partially transparent object that is illuminated with two lights, one visible from each side of the object. Start with a ray from the eye Ray Tracing What was the rendering equation? Motivate & list the terms. Relate the rendering equation to forward ray tracing. Why is forward ray tracing not good for image formation? What is the difference

More information

Nonlinear Image Interpolation using Manifold Learning

Nonlinear Image Interpolation using Manifold Learning Nonlinear Image Interpolation using Manifold Learning Christoph Bregler Computer Science Division University of California Berkeley, CA 94720 bregler@cs.berkeley.edu Stephen M. Omohundro'" Int. Computer

More information

CHAPTER 4 SEMANTIC REGION-BASED IMAGE RETRIEVAL (SRBIR)

CHAPTER 4 SEMANTIC REGION-BASED IMAGE RETRIEVAL (SRBIR) 63 CHAPTER 4 SEMANTIC REGION-BASED IMAGE RETRIEVAL (SRBIR) 4.1 INTRODUCTION The Semantic Region Based Image Retrieval (SRBIR) system automatically segments the dominant foreground region and retrieves

More information

Graphics for VEs. Ruth Aylett

Graphics for VEs. Ruth Aylett Graphics for VEs Ruth Aylett Overview VE Software Graphics for VEs The graphics pipeline Projections Lighting Shading VR software Two main types of software used: off-line authoring or modelling packages

More information

Using Image's Processing Methods in Bio-Technology

Using Image's Processing Methods in Bio-Technology Int. J. Open Problems Compt. Math., Vol. 2, No. 2, June 2009 Using Image's Processing Methods in Bio-Technology I. A. Ismail 1, S. I. Zaki 2, E. A. Rakha 3 and M. A. Ashabrawy 4 1 Dean of Computer Science

More information

Combining Abstract Images using Texture Transfer

Combining Abstract Images using Texture Transfer BRIDGES Mathematical Connections in Art, Music, and Science Combining Abstract Images using Texture Transfer Gary R. Greenfield Department of Mathematics & Computer Science University of Richmond Richmond,

More information

Scientific imaging of Cultural Heritage: Minimizing Visual Editing and Relighting

Scientific imaging of Cultural Heritage: Minimizing Visual Editing and Relighting Scientific imaging of Cultural Heritage: Minimizing Visual Editing and Relighting Roy S. Berns Supported by the Andrew W. Mellon Foundation Colorimetry Numerical color and quantifying color quality b*

More information

Brightness and geometric transformations

Brightness and geometric transformations Brightness and geometric transformations Václav Hlaváč Czech Technical University in Prague Czech Institute of Informatics, Robotics and Cybernetics 166 36 Prague 6, Jugoslávských partyzánů 1580/3, Czech

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING Y. Kuzu a, O. Sinram b a Yıldız Technical University, Department of Geodesy and Photogrammetry Engineering 34349 Beşiktaş Istanbul, Turkey - kuzu@yildiz.edu.tr

More information

Chapter 4. Clustering Core Atoms by Location

Chapter 4. Clustering Core Atoms by Location Chapter 4. Clustering Core Atoms by Location In this chapter, a process for sampling core atoms in space is developed, so that the analytic techniques in section 3C can be applied to local collections

More information

Multimedia Information Retrieval

Multimedia Information Retrieval Multimedia Information Retrieval Prof Stefan Rüger Multimedia and Information Systems Knowledge Media Institute The Open University http://kmi.open.ac.uk/mmis Why content-based? Actually, what is content-based

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

Understanding Gridfit

Understanding Gridfit Understanding Gridfit John R. D Errico Email: woodchips@rochester.rr.com December 28, 2006 1 Introduction GRIDFIT is a surface modeling tool, fitting a surface of the form z(x, y) to scattered (or regular)

More information

Texture Mapping. Images from 3D Creative Magazine

Texture Mapping. Images from 3D Creative Magazine Texture Mapping Images from 3D Creative Magazine Contents Introduction Definitions Light And Colour Surface Attributes Surface Attributes: Colour Surface Attributes: Shininess Surface Attributes: Specularity

More information

Color Characterization and Calibration of an External Display

Color Characterization and Calibration of an External Display Color Characterization and Calibration of an External Display Andrew Crocker, Austin Martin, Jon Sandness Department of Math, Statistics, and Computer Science St. Olaf College 1500 St. Olaf Avenue, Northfield,

More information

Differential Structure in non-linear Image Embedding Functions

Differential Structure in non-linear Image Embedding Functions Differential Structure in non-linear Image Embedding Functions Robert Pless Department of Computer Science, Washington University in St. Louis pless@cse.wustl.edu Abstract Many natural image sets are samples

More information

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison CHAPTER 9 Classification Scheme Using Modified Photometric Stereo and 2D Spectra Comparison 9.1. Introduction In Chapter 8, even we combine more feature spaces and more feature generators, we note that

More information

Image enhancement for face recognition using color segmentation and Edge detection algorithm

Image enhancement for face recognition using color segmentation and Edge detection algorithm Image enhancement for face recognition using color segmentation and Edge detection algorithm 1 Dr. K Perumal and 2 N Saravana Perumal 1 Computer Centre, Madurai Kamaraj University, Madurai-625021, Tamilnadu,

More information

CS4670: Computer Vision

CS4670: Computer Vision CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know

More information

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant

An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant Sivalogeswaran Ratnasingam and Steve Collins Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, United Kingdom

More information

Stereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz

Stereo II CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz Stereo II CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Camera parameters A camera is described by several parameters Translation T of the optical center from the origin of world

More information

Gauss-Sigmoid Neural Network

Gauss-Sigmoid Neural Network Gauss-Sigmoid Neural Network Katsunari SHIBATA and Koji ITO Tokyo Institute of Technology, Yokohama, JAPAN shibata@ito.dis.titech.ac.jp Abstract- Recently RBF(Radial Basis Function)-based networks have

More information

Announcements. Lighting. Camera s sensor. HW1 has been posted See links on web page for readings on color. Intro Computer Vision.

Announcements. Lighting. Camera s sensor. HW1 has been posted See links on web page for readings on color. Intro Computer Vision. Announcements HW1 has been posted See links on web page for readings on color. Introduction to Computer Vision CSE 152 Lecture 6 Deviations from the lens model Deviations from this ideal are aberrations

More information

Design Visualization with Autodesk Alias, Part 2

Design Visualization with Autodesk Alias, Part 2 Design Visualization with Autodesk Alias, Part 2 Wonjin John Autodesk Who am I? Wonjin John is an automotive and industrial designer. Born in Seoul, Korea, he moved to United States after finishing engineering

More information

Variations of images to increase their visibility

Variations of images to increase their visibility Variations of images to increase their visibility Amelia Carolina Sparavigna Department of Applied Science and Technology Politecnico di Torino, Torino, Italy The calculus of variations applied to the

More information

Spectral Images and the Retinex Model

Spectral Images and the Retinex Model Spectral Images and the Retine Model Anahit Pogosova 1, Tuija Jetsu 1, Ville Heikkinen 2, Markku Hauta-Kasari 1, Timo Jääskeläinen 2 and Jussi Parkkinen 1 1 Department of Computer Science and Statistics,

More information

Cosmic Ray Shower Profile Track Finding for Telescope Array Fluorescence Detectors

Cosmic Ray Shower Profile Track Finding for Telescope Array Fluorescence Detectors Cosmic Ray Shower Profile Track Finding for Telescope Array Fluorescence Detectors High Energy Astrophysics Institute and Department of Physics and Astronomy, University of Utah, Salt Lake City, Utah,

More information

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you

This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you will see our underlying solution is based on two-dimensional

More information