Textured Image Based Painterly Rendering Under Arbitrary Lighting


George ElKoura
University of Toronto
gelkoura@cs.toronto.edu

Abstract

Painterly rendering styles have received some attention lately as an interesting alternative to photorealistic image processing and synthesis. State of the art painterly renderings, while producing results that superficially resemble certain artistic styles, such as impressionistic or expressionistic, fail to convey the implicit detail provided by the artist's choices of brushes and canvas. In this paper we present techniques to simulate the effects of paint thickness and canvas texture as they interact under arbitrary lighting conditions. We also extend previous works by allowing the user to specify brush nibs of any size, shape and texture.

1. Introduction

Current techniques employed in image based painterly renderings attempt to simulate artistic styles by simulating a paint stroke's color, size, length, spacing and curvature [1]. However, physical paint artists have many more variables at their disposal. They can convey their art by their choice of paint thickness in each stroke, by their use of canvas texture, and the luckier ones even have the choice of lighting conditions under which their paintings are displayed. We present techniques that extend previous work to enable the representation of paint thickness, canvas, and lighting. Moreover, we present a technique to generalize the shape of the brush nib.

In section 2 we discuss previous work. In section 3 we discuss the generalization of the brush nib's shape. Sections 4, 5 and 6 discuss adding paint thickness, canvas and lighting respectively. We discuss our implementation and results in sections 7 and 8. Finally we discuss some ideas for future work in section 9.

2. Previous Work

Hertzmann et al. presented techniques to generate image-based painterly renderings that make use of curved brush strokes and brushes of varying size [1].
This work was later extended by adapting some video-coherence techniques that were originally presented by Litwinowicz in [9]. While this work allowed for varying brush sizes, it did not make provisions for varying the brush shape. We address the issue of using a generalized brush shape in this paper.

Figure 1: A painted butterfly rendered with brush depth and canvas thickness.

The idea of simulating more realistic brush strokes is not new. Lewis used signal processing approaches to give strokes a texture in 1984 [7], and Turner Whitted, in 1983, introduced the idea of sweeping an anti-aliased circle along a curve to produce anti-aliased lines [8]. Whitted also suggested that sweeping other kinds of primitives would produce other interesting effects, such as a line that looks as though it were made of glass.

Modern, commercial image processing software, such as Corel Corporation's PHOTO-PAINT, allows for the simulation of a variety of paint and canvas effects. The paint effects seem to be generated by processing the image with a convolution filter, and do not seem to attempt creating a realistic looking painting. Simulating the canvas in this package is done, it seems, by using the luminance of a pattern to emboss the image and cross-dissolve using user-specified parameters. Again, these

techniques, though interesting, do not make any provisions for arbitrary lighting and do not attempt to achieve a 3D look.

Central to our technique is the use of displacement maps to give the appearance of depth to a flat image. Displacement maps are a generalization of the idea of texture maps and were presented as part of the Reyes rendering architecture [6], which was of great help in the development of this work.

3. Using a general brush nib

Hertzmann, in [1], uses an anti-aliased circle as the brush nib. We use a more general approach by allowing the user to use an arbitrary image as a nib. The nib template is used as a multiplicative filter on the choice of color obtained from the original picture.

The choice of a circle obviates the need to properly orient the brush nib along the direction of the stroke. However, when an arbitrary shape is used, orienting the nib becomes critical. We require that the nib template chosen by the user be given in a vertical position; in other words, it should be pointing up. Then we compute the angle between the vector (0, 1) and the tangent vector to the curve at the point at which we wish to render the nib, and we rotate the nib by this angle. The rotation is computed as

  x' = xu - y√(1 - u²)
  y' = x√(1 - u²) + yu

where (x, y) are the un-rotated coordinates, (x', y') are the corresponding rotated coordinates and u is the y-component of the normalized derivative of the curve at the point (x, y). This form, in general, is more efficient than explicitly computing the angle and the rotation matrix. We use NURBS for our stroke curves and computing the derivatives is fairly straightforward (see [4]).

Applying this rotation significantly improves the quality of the rendered strokes; this should be obvious in the general case. However, even for a circle with noise applied, not performing this corrective rotation will wash out the effect of the constant noise.
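As an illustration, the angle-free rotation described above can be sketched as follows. This is our own minimal version, not the paper's code; it assumes the stroke tangent has been normalized, and the sign convention depends on the curve's orientation:

```python
import math

def rotate_nib_point(x, y, u):
    """Rotate a nib-template coordinate (x, y) so that a nib authored
    pointing up aligns with the stroke tangent, where u is the
    y-component of the normalized tangent. Since cos(theta) = u,
    sin(theta) = sqrt(1 - u**2), no explicit angle is ever computed."""
    s = math.sqrt(max(0.0, 1.0 - u * u))  # sin(theta)
    return (x * u - y * s, x * s + y * u)
```

When the tangent already points up (u = 1) the rotation is the identity; a horizontal tangent (u = 0) turns the nib by a quarter turn.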
One way to get around this is to regenerate the noise on the brush nib for each dab; however, this is unrealistic and does not represent the properties of brush bristles, which produce predictable dabs per stroke. Ideally, both would be implemented and left up to the user's discretion.

4. Paint Thickness

One approach to generating a 3D texture look that interacts with arbitrary lighting is to generate normal information for each stroke and render each stroke using simple bump-mapping techniques. This approach proved to be unsuccessful. The quality of the results is in direct proportion to the quality of the normals supplied by the user. The best way to produce these normals is to force the user to model a 3D brush nib. Needless to say, this puts too much onus on the user and provides for a very frustrating experience.

The second approach we tried, which proved to be much more successful, makes use of displacement maps [6]. Instead of generating normals, we generate a displacement map. (The terms displacement map and height map are used interchangeably.) The displacement map is like a texture map that contains the height of each pixel. When geometry is rendered with a displacement map, each point is displaced by the value indicated in the displacement map before finally being rendered on screen.

Figure 2: Example of a non-symmetric noisy brush nib.

This approach has many advantages over the initial normals technique. First, it uses one plane instead of three, thus saving memory. Secondly, it allows for self-shadowing, which bump-mapping does not; in other words, under certain lighting conditions, some brush strokes will be shadowed by other strokes. Lastly, and most importantly, it is much easier for a user to supply a displacement map for a brush nib than it is to supply the normals for the nib. The user only supplies one image that represents the brush nib. The luminance of the image is used as height information at each pixel, and the shape is determined by setting a threshold on luminance.
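The luminance-to-height conversion just described might be sketched as follows (a minimal version under our own assumptions: an 8-bit grayscale nib image and a user-chosen threshold; the function name is ours, not the paper's):

```python
def nib_from_image(gray, threshold=32):
    """Build a brush-nib height map from a grayscale image.

    gray: 2D list of 8-bit luminance values (0-255).
    Pixels at or below the threshold fall outside the nib's shape
    (height 0); brighter pixels contribute their luminance,
    normalized to [0, 1], as paint height."""
    return [
        [(v / 255.0) if v > threshold else 0.0 for v in row]
        for row in gray
    ]
```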
That is to say, pixels with a luminance value above a certain threshold will contribute to the shape of the nib.

5. Canvas Texture

Much of a painting's highlights under certain lighting conditions result from the interaction of the paint with the canvas. It is thus important, in order to produce convincing results, to simulate the canvas as well as the paint thickness. Other attempts at simulating the painted

Figure 3: Canvas height maps generated using different parameters. (a) and (b) are 16×16 with heights 0.7 and 0.2 respectively; (c) is 4×4 with height 0.5.

canvas use a patterned texture to brighten and darken certain parts of the image to produce the effect of a picture drawn on a canvas. Other luminance patterns produce pictures that have the appearance of being drawn on a brick wall, for example [5]. The advantage of our technique is that the canvas only affects the height of the brush stroke, not the color of the brush stroke, as in real world paintings. The other techniques cannot be arbitrarily lit and do not show the subtle specular highlights seen with our technique.

The canvas pattern in our system is only applied to the height plane of the image and only appears during the rendering of the displacement map. Before any of the strokes are rendered, the canvas pattern is generated on the height plane according to user settings. As paint stroke thickness is rendered into this plane, we add the stroke thickness to the height at that location (which includes the canvas and previous strokes). Our initial implementation added and capped the sum for each stroke. However, for complicated stroke patterns, we quickly reach the cap and end up with a uniform (the maximum) height at each pixel, in essence losing all the height information. Instead we let the sum accumulate un-capped and, after the painting is fully rendered, find the maximum height and divide all pixels by it. This normalization step produces great results.

The user can specify the size of the canvas grid and the height of the canvas. Only regular patterns are supported by the system, though there is nothing that precludes using an arbitrary, user-specified canvas pattern. The canvas is generated by the algorithm shown in Algorithm 1. The variables patch_height, patch_width and canvas_height are user-supplied parameters. The variable base_height is used to make sure that the canvas has at least some height throughout.
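A transliteration of this canvas generation into Python might look as follows. The function name, boundary handling, and the exact indexing of patches are our own choices; the quadratic roll-off per patch follows the description above:

```python
def generate_canvas(image_width, image_height,
                    patch_width, patch_height,
                    canvas_height, base_height):
    """Generate a woven-canvas height map: the image is tiled with
    patches that alternate, checkerboard-style, between horizontal
    and vertical threads, each shaped by a quadratic roll-off."""
    heights = [[0.0] * image_width for _ in range(image_height)]
    for py in range(0, image_height, patch_height):
        for px in range(0, image_width, patch_width):
            # Odd checkerboard cells hold horizontal threads.
            horizontal = ((px // patch_width) + (py // patch_height)) % 2 == 1
            for q in range(patch_height):
                for p in range(patch_width):
                    x, y = px + p, py + q
                    if x >= image_width or y >= image_height:
                        continue
                    if horizontal:
                        a = (patch_width ** 2) / 4.0
                        d = p - patch_width / 2.0   # offset from thread center
                    else:
                        a = (patch_height ** 2) / 4.0
                        d = q - patch_height / 2.0
                    v = (-canvas_height / a) * d * d + canvas_height
                    heights[y][x] = v + base_height
    return heights
```

Each thread peaks at canvas_height along its center line and falls quadratically to zero at the patch edge, with base_height added everywhere so the canvas never reaches zero thickness.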
The algorithm simply breaks the image up into patches (of user-specified size) and uses quadratic roll-off to simulate the weaves.

Algorithm 1: Routine for generating canvas height maps.

  numpatchx ← image_width / patch_width
  numpatchy ← image_height / patch_height
  for patchx = 0 to numpatchx
    for patchy = 0 to numpatchy
      for p = -patch_width/2 to patch_width/2
        for q = -patch_height/2 to patch_height/2
          if (patchx + patchy) is odd then  { horizontal patch }
            a ← patch_width² / 4
            c ← canvas_height
            v ← (-c/a)·p² + c
          else  { vertical patch }
            a ← patch_height² / 4
            c ← canvas_height
            v ← (-c/a)·q² + c
          pixel_height(p, q) ← v + base_height

6. Lighting the Painting

Lighting the painting, after the color plane and the height plane have been generated, is taken care of by the renderer. We start with a single quadrilateral polygon that has the same aspect ratio as the original image, and a camera

with the same resolution, and apply the color plane as a texture map and the height plane as a displacement map. We can then place lights around the polygon to light our painting appropriately and send it to a Reyes-based renderer [6].

After the displacement map and the texture map have been generated, what can be done with the renderer is limitless. Reyes renderers work by converting all primitives to micropolygons and adaptively subdividing them as required, based on the footprint occupied by the micropolygon in the image. The rendering pipeline allows for these micropolygons to be displaced before they are rendered. We take advantage of this by letting the renderer do the hard work of figuring out how much our polygon needs to be subdivided, and then we displace the micropolygons according to the generated displacement map. Finally the renderer takes the moved micropolygons and renders them with accurate lighting and shadows.

7. Implementation

The techniques described in this paper were implemented as a custom Composite Operator (COP) in Side Effects Software's Houdini Animation Tools. We used COPs version 2.0, which was still in production during the development of this work. This presented its own challenges, including the necessity to implement image manipulation functions that would be readily available in other libraries designed for computer vision applications. However, the power and flexibility of this architecture were well worth the extra effort spent in implementing some simple routines. The images were then rendered with Mantra, a renderer based on the Reyes architecture, similar to Pixar's RenderMan.

We used a few variations on Hertzmann's modified algorithm. Instead of using summed-area tables for blurring (and video coherence), we used the slower Gaussian blur. Summed-area tables are exceptional when performance is paramount, but are disadvantageous when memory footprints must be kept low. Also, instead of using cubic B-splines, we used cubic NURBS curves with a uniform basis.
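For reference, the summed-area-table approach we traded away can be sketched as follows (our own illustration, not the paper's code). After an O(width·height) build, any rectangular sum, and hence any box-blur average, is answered in O(1), at the cost of keeping the full table in memory:

```python
def summed_area_table(img):
    """Build a 2D prefix-sum table: sat[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    sat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0.0)
    return sat

def box_sum(sat, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0, y0)-(x1, y1) in O(1),
    by inclusion-exclusion on four table lookups."""
    total = sat[y1][x1]
    if x0 > 0:
        total -= sat[y1][x0 - 1]
    if y0 > 0:
        total -= sat[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1][x0 - 1]
    return total
```

Dividing box_sum by the rectangle's area gives the box-filtered value at its center, which is the constant-time blur the trade-off refers to.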
8. Results

Figure 4: (a) The original dog picture. (b) The generated height map. (c) The final render of the painting.

In this section we discuss results obtained using our system. In general we find the results good, though we believe that it may be hard to convince someone that they are real paintings. The shortcomings of the system, and

suggestions for ameliorating them, are discussed in the section following this one. More results and a more thorough discussion of the results can be found at http://www.dgp.toronto.edu/~gelkoura/proj/index.html.

8.1. The Butterfly

The butterfly results are shown on the color plates at the end of the paper. We first show what the image would be like rendered without any height information at all. This is similar to the effect that would be generated by other painterly rendering algorithms, and looks rather flat. We then show what the image would be like rendered with only the height information obtained from the brush strokes (and not the canvas). The painting is rendered with two light sources at the top corners looking towards the painting. Without the canvas, the painting looks as though it has been painted on a perfect surface. This is because this particular picture has a constant-colored background. The painting doesn't look quite right when only using the brush stroke heights.

If we don't use the brush height information at all and only use the canvas height, the painting looks more convincing than the previous case. This is because this loosely simulates thin paint on a thick canvas, which is believable. Finally, when taking into account both the brush stroke and the canvas height, we achieve the best and most convincing of all the paintings. Note here that the specular highlights are exaggerated for illustrative purposes, and that an artist would not choose such settings.

8.2. The Dog

A picture of a dog, kindly donated, was used to illustrate the effect of lighting the painting with different light sources. An animation of lights moving across the painting can be downloaded at http://www.dgp.toronto.edu/~gelkoura/proj/index.html. The dog picture is rich in detail and the stroke patterns produced (which are nicely visible in the height plane) are more complex than those produced by the butterfly picture.
It is this complexity and fullness that made us realize quickly that capping the height after each stroke was a bad idea.

9. Future Work

While we found the results to be exciting and rather promising, the paintings produced still have a certain plastic feeling that is so common to computer graphics. Our gaudy choice of specular highlight parameters certainly doesn't help. However, what we seem to be missing mostly is the interaction of colors between strokes. Using a constant color per stroke is unrealistic and produces a fake-looking painting.

Easy additions to the algorithm may also improve the quality of the output. For example, allowing the user to specify the roughness parameter of the strokes, and not just the specular, may reduce that plastic feel. As far as the generation of the actual paint strokes is concerned, we did all our calculations in RGB space and we expect we would get better results if we worked in HSV. We rely heavily on luminance, where we really should rely on hue. This is visible in the dog painting, where the luminance difference between the dog's coat and the sidewalk is negligible, whereas the hue difference would have been substantial.

10. Acknowledgments

I would like to thank Professor Kiriakos Kutulakos for introducing me to the intermingling of computer graphics and computer vision. I would also like to thank the Research and Development staff at Side Effects Software for all their wonderful support and amazing tools.

11. References

[1] Aaron Hertzmann, "Painterly Rendering with Curved Brush Strokes of Multiple Sizes," In ACM SIGGRAPH 98 Conference Proceedings, July 1998, pp. 453-460.
[2] Aaron Hertzmann and Ken Perlin, "Painterly Rendering for Video and Interaction," In First International Symposium on Non-Photorealistic Animation and Rendering, June 2001.
[3] Side Effects Software, Inc. Houdini 3D Animation Tools 5.
[4] L. Piegl and W. Tiller, The NURBS Book, 2nd Edition, Springer-Verlag, Berlin, 1997.
[5] Corel Corporation, Inc. Corel PHOTO-PAINT 10.
[6] R. L. Cook, L. Carpenter and E. Catmull, "The Reyes Image Rendering Architecture," In ACM SIGGRAPH 87 Conference Proceedings, July 1987, pp. 95-102.

[7] John-Peter Lewis, "Texture Synthesis for Digital Painting," In ACM SIGGRAPH 84 Conference Proceedings, July 1984, pp. 245-252.
[8] Turner Whitted, "Anti-Aliased Line Drawing Using Brush Extrusion," In ACM SIGGRAPH 83 Conference Proceedings, July 1983, pp. 151-156.
[9] Peter Litwinowicz, "Processing Images and Video for an Impressionist Effect," In Non-Photorealistic Rendering, SIGGRAPH Course Notes, 1999.

Figure 5: (a) The original butterfly picture and (b) the painting produced without using any height information.

Figure 6: (a) Displacement map generated using only the height information for the brush strokes and (b) the resulting rendered painting.

Figure 7: (a) Displacement map and (b) resulting image when only canvas height information is used.

Figure 8: (a) Displacement map and (b) resulting image when both the canvas and the brush stroke height information are used.