Bixels: Picture Samples With Embedded Sharp Boundaries


Bixels: Picture Samples With Embedded Sharp Boundaries
Jack Tumblin and Prasun Choudhury, Northwestern University, Evanston IL, USA

Today I want to ask a few provocative questions, namely: Why are pixels such a good idea? And can we improve on pixels? How? Even if you don't really like my answers (they have some problems), I want to convince you that the way we represent digital images COULD be improved, and show you a simple attempted improvement that I call "bixels": picture samples that embed sharp boundaries with subpixel accuracy. Bixels are useful because:
--they describe true discontinuities in an image;
--they keep scene quantities separate: see how the green leaf, purple flower, and yellow background values are kept entirely unmixed in the bixel image at top left, while the pixel image mixes them together;
--these discontinuities are machine-readable, and may make images easier to compare, search, warp, edit, and align;
--they link image contents back to the properties of the original scene, and may help us understand images using NonPhotoRealistic (NPR) arguments.

A Bixel Image: Piecewise-Smooth
Pixel interpolation: ignore boundaries, smooth across them. Bixel interpolation: stop at boundaries, keep gradients. (Center pixel value = 1.0; surrounding pixel values = 0.0.)

The most important idea from this talk: an antialiased pixel image is usually presumed to be smooth and continuous everywhere when reconstructed from its samples. Pixels ignore visually important boundaries and smooth across them. But BIXELS do not: they form a piecewise-smooth image, and include a 2D graph of boundaries that are true discontinuities in both the intensities and the gradients of the image. This planar graph of discontinuities is stored with subpixel accuracy, and takes just 8 more bits per pixel to achieve the effects I will show you.

But why do this? Aren't pixels the best way to store digital pictures? How can we store their visually meaningful contents in a machine-readable form? What sort of picture primitive would help computers search, compare, and manipulate images for us? What, exactly, are the ideal contents of a digital picture? The more I think about how we store pictures digitally, the more perplexed I become.

What is the ideal digital picture? If your answer is "the display radiance, as if gathered by a lens," then you can ignore this talk; pixels are a beautiful and complete solution. But if your answer matches mine, "a container for visual experiences, but editable," then you know the editing is awkward and indirect: pixels are a nearly-irreversible encoding!

If you think of digital images as a veridical record of the light entering a lens, then pixels are complete and offer little room for improvement. Optics and light transport are completely linear phenomena, and linear systems describe them beautifully. Linear filters, convolution, sampling and reconstruction, and other tools give us a complete closed algebra for images described by pixels. We truly don't need anything else. THUS, for this definition of digital images, please ignore this talk; pixels are a beautiful, complete solution.

But this has always bothered me. If I think of a digital image as a container for visual experience, e.g. all the info needed to recreate, edit, and manipulate visually perceived quantities, then pixels are a nearly-irreversible encoding! We generally CAN'T link the images backwards to the scene properties that caused them, and we can't link them forward to our perceptions of those scene quantities: perceived reflectance, illumination, shape, movement, and more.

Current Digital Pictures
PHYSICAL: 3D scene (light sources, BRDFs, shapes, positions, movements; eyepoint position, movement, projection) → Optics or Rendering → Image I(x,y,λ,t) → Exposure control, tone map → Pixels → Display RGB(x,y,t_n) → Vision → PERCEIVED scene.

Humans see basic, partial information about boundaries, shape, occlusion, lighting, shadows, and texture, with few discernible difficulties from high dynamic range, resolution, noise, lighting, or exposure. This basic data is usually difficult or impossible to reliably extract from pixels. But why require extraction? Instead, we should encode this information as part of the image itself. Towards this goal, bixels offer a straightforward way to represent intensity and gradient discontinuities within images with subpixel precision, at a fixed cost: an additional 8 bits per pixel.

Ideal Digital Pictures
PHYSICAL: 3D scene (light sources, BRDFs, shapes, positions, movements; eyepoint position, movement, projection) → Optics or Rendering → Image I(x,y,λ,t) → Exposure control or tone map → SOMETHING NEW (bixels: a very incomplete answer) → Display RGB(x,y,t_n) → Vision → PERCEIVED scene.

What we would like is something that more directly describes the visual experience; something that, with some computing, would allow a computer-equipped display to construct a display image, one that, based on the viewing conditions, has the best chance of evoking the desired perceptions of the original scene.

A 2D Height Field of Sample Points
Pixels: assume smooth between samples; discontinuities PERCEIVED, but not stored. Bixels: make scissor cuts in a rubber sheet.

An easy way to visualize the difference between bixels and pixels is to think of a pixel image as an intensity height field. Each pixel is a vertical post whose height is given by the pixel's value, and between these posts we usually presume the image is a smooth (limited-bandwidth) surface, as if it were a stretchy rubber sheet glued to each post and stretched like a drum-head between the posts. But bixels are different: they describe the visually important boundaries in the image as GENUINE DISCONTINUITIES in both intensity and gradients (such as object silhouettes, self-occlusions, shadows, the seams between different parts, etc.). Suppose we could take a pair of scissors and cut the rubber sheet that is the height field (and it is a weird, magical kind of rubber that doesn't shrink in x,y, but tries to stay as flat as possible), so that the rubber sheet relaxes along the boundaries we cut. THEN, instead of stretching between unrelated sample values (e.g. red and white squares in a checkerboard) and blending them together to approximate the boundary, BIXELS keep them separate. Thus the red areas on one side of the boundary are purely red, and the white areas are purely white. ALSO, the rubber sheet relaxes to keep local gradients consistent on its side of the boundary. Thus BIXELS CAN PRESERVE BOTH step-like discontinuities (as in this checkerboard) AND ridge-like discontinuities, without underestimating the sharpness of the boundaries.

Bixels: Discontinuous Scissor Cuts
For well-defined intensities everywhere in the image, scissor cuts stay BETWEEN sample points: at most 1 cut between each pair of adjacent sample points (all rubber stays attached to the sample points). This gives a Nyquist-like complexity limit: the number of rubber pieces is at most the number of samples.

Bilinear Interpolation: Pixels
Assumptions: ignore boundaries. Intensity I = pixel values at sample points; gradient ∇I = adjacent pixel differences. Pixel kernel function k_p(x,y) = (1 - |x|)(1 - |y|); extent: 2x2 tiles. (Tile == the square area between 4 sample points.)

The bilinear pixel kernel function shown here is the result of interpolating an image that is zero everywhere EXCEPT for a single unity-valued pixel at the origin. If the red boundary line were separating samples of two unrelated quantities in the scene (such as the silhouette of the leaf against the background), then bilinear interpolation would mix together the leaf and background colors between the samples.
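As a concrete sketch of this slide (my own illustration, not code from the talk), the bilinear kernel and its use for reconstruction can be written as:

```python
def k_p(x, y):
    """Bilinear pixel kernel: a 2x2-tile 'tent' centered at the origin.
    Equals 1 at (0,0) and falls linearly to 0 at |x| = 1 or |y| = 1."""
    return max(0.0, 1.0 - abs(x)) * max(0.0, 1.0 - abs(y))

def bilinear_reconstruct(samples, x, y):
    """Reconstruct a continuous image from a 2D grid of samples by summing
    kernel-weighted sample values (standard bilinear interpolation)."""
    h, w = len(samples), len(samples[0])
    total = 0.0
    for j in range(h):
        for i in range(w):
            total += samples[j][i] * k_p(x - i, y - j)
    return total

# A single unity-valued pixel at the origin reproduces the kernel itself:
img = [[0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]
print(bilinear_reconstruct(img, 1.5, 1.0))  # 0.5: halfway between center and right sample
```

Note that the reconstruction smooths across any boundary that happens to lie between samples; that is exactly the behavior the slide criticizes.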

Bilinear Interpolation: Bixels
Assumption: never cross the boundaries. Intensity I = pixel values at sample points; gradient ∇I = adjacent pixel differences. Bixel kernel function k_b(x,y) = boundary-dependent! Easier: use tiles to evaluate.

But the kernel function for bixels is notably different: it is strongly affected by the presence of a nearby boundary (marked in red). It does not mix together the sample values, but instead uses bilinear interpolation from nearby pixel values to preserve the gradients and intensities separately on either side of the boundary.

Tile and Boundary Definitions
Tile: unit square with sample points at its corners; sample points labeled ABCD (counter-clockwise). Boundary point (bpt): a vertex in a planar graph of boundaries; only 1 allowed per tile; its position (x_p, y_p) is stored with the lower-left sample point (A) value. Boundary segment: a single link between 2 boundary points; it connects 2 adjacent bpts, and at most 1 boundary segment is allowed to cross each tile side.

Now for a more formal definition of the boundaries stored and the interpolation method used by bixels:
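A minimal storage sketch of these definitions (the class and field names are my own, not from the paper):

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class BoundaryPoint:
    # Subpixel position of the tile's single boundary point, stored
    # relative to the lower-left sample point A (0 <= xp, yp < 1).
    xp: float
    yp: float

@dataclass
class Tile:
    # Sample values at corners A, B, C, D, labeled counter-clockwise
    # starting from the lower-left corner.
    A: float
    B: float
    C: float
    D: float
    # At most one boundary point per tile; None if the tile is boundary-free.
    bpt: Optional[BoundaryPoint] = None
    # Tile sides (0..3) crossed by boundary segments; at most one
    # segment may cross each side.
    crossed_sides: List[int] = field(default_factory=list)
```

Storing one optional subpixel vertex per tile is what bounds the extra cost to a small fixed number of bits per pixel.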

Simple Bixel Examples
Left: sample points without boundaries (40X bounded bilinear interpolation). Right: the same sample points with boundaries added, marked in red (grid == tiles).

Now suppose we take sample values at the corners of the grid shown here. On the left, we bilinearly interpolated them. On the right, we show them (expanded by the box filter, e.g. pixel replication) with red lines that mark the scissor cuts in the image; they show where to place boundaries.

Simple Bixel Examples
Left: sample points without boundaries (40X bounded bilinear interpolation). Right: the same sample points with boundaries (40X bounded bilinear interpolation).

And here is the effect of those scissor cuts. The resulting bixel image, even when enlarged 40X, preserves local gradients, and the boundary location affects intensity: note the bright, ridge-like vertical boundary at the top center of the gray rectangle, and the sharp boundaries and smooth shading along the edge.

Pixels vs. Bixels
14x14 pixels with bilinear interpolation, versus 14x14 bixels with bilinear interpolation and manually-placed boundaries. Another example of manual editing, where boundaries greatly add to the comprehensibility of the image.

Boundary-Limited Interpolation
All boundary segments cut tiles into several separate regions. Name the regions by the corners they include: for example P_A, P_AB, P_BC, P_BCD, or P_ABCD. A bilinear patch function P(x,y) defines the value of the image at (x,y) in each named region.

So how do we do this? How can we interpolate between sample points, but not across boundaries, and still maintain local intensities and gradients?

Boundary-Limited Interpolation
If no boundaries are present, the patch function is just the bilinear basis (it ignores the bpt):
P_ABCD(x,y) = A(1-x_p)(1-y_p) + B(x_p)(1-y_p) + C(x_p)(y_p) + D(1-x_p)(y_p)
where (x_p, y_p) is the local position within the tile, A is the lower-left corner, and B, C, D follow counter-clockwise.
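Reading that formula literally (with A at the lower-left and B, C, D counter-clockwise, which is my reading of the tile definitions), a minimal sketch:

```python
def patch_abcd(A, B, C, D, xp, yp):
    """Bilinear basis for a boundary-free tile: interpolates the four corner
    samples A (lower-left), B, C, D (counter-clockwise) at the local tile
    coordinate (xp, yp) in [0,1]^2."""
    return (A * (1 - xp) * (1 - yp)
            + B * xp * (1 - yp)
            + C * xp * yp
            + D * (1 - xp) * yp)

# The basis reproduces each corner value exactly, and blends in between:
print(patch_abcd(0.0, 1.0, 1.0, 0.0, 0.5, 0.5))  # 0.5 at the tile center
```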

Boundary-Limited Interpolation
If the boundaries for a patch exclude a tile corner (here C), estimate the missing corner value using forward differences from neighboring sample values (e.g. from D and its outside neighbor D_W, or from B and its outside neighbor B_S). Each pseudo-corner value such as C* is also a bilinear evaluation, so the patch function stays bilinear.

Boundary-Limited Interpolation
If the boundaries for a patch exclude a tile corner, and boundaries exclude the outside neighbors too, just use what you have: estimate the pseudo-corner from the tile's own values (e.g. find C* from the plane through A, B, D, etc.). All cases are STILL bilinear functions (see paper).
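A sketch of the two pseudo-corner cases from the last two slides. The function names, the exact extrapolation direction, and the averaging of the two estimates are my reading of the slides, not verbatim from the paper:

```python
def pseudo_corner_from_neighbors(D, D_w, B, B_s):
    """Estimate an excluded corner C* by forward differences: continue the
    gradient D - D_w past D (along the top row), and B - B_s past B (along
    the right column), then average the two linear extrapolations."""
    from_d = D + (D - D_w)
    from_b = B + (B - B_s)
    return 0.5 * (from_d + from_b)

def pseudo_corner_from_tile(A, B, D):
    """Fallback when the outside neighbors are excluded too: take C* from
    the plane through A, B, D. For the bilinear patch this is C* = B + D - A."""
    return B + D - A

print(pseudo_corner_from_tile(0.0, 1.0, 1.0))  # 2.0: the A,B,D plane extended to corner C
```

Either way the estimate is a linear combination of stored samples, which is why the patch function remains bilinear in every case.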

Where do Boundaries come from?
Computer graphics renderers: transform significant scene boundaries to the image (as in point-edge rendering, Bala et al. 2003), somehow deciding what is visually significant. Novel cameras + high resolution + edge finders: use lighting changes to find depth discontinuities (as in the NPR camera, Raskar et al. 2004). Sub-pixel edge-finding schemes: scale-space search, model-based fitting, etc. (Elder 1999, etc.).

We have something of a circular problem with bixels: how can we find the boundaries we want to put into the images themselves? We used 3 approaches in the results shown in the paper:

Results: boundary = depth discontinuity (source data courtesy Ramesh Raskar, MERL). Source (1100x800); boundaries (50x65).

Results: boundary = depth discontinuity (source data courtesy Ramesh Raskar, MERL). Pixels (bilinear) at 50x65 versus bixels (bilinear) at 50x65.

Related Work
Computer vision / edge description; image processing (edgels, etc.): early recognition of the representation problem.
Resolution-dependent NPR (Salisbury et al. 1996): machine-readable connected boundaries.
(Edge+Blur)-only image encoding (Elder 1995, Elder 200): an edge primitive with sharpness control.
Fast ray-tracing edge/point renderer (Bala et al. 2003): accurate scene bounds make accurate pixels.
Sharp, alias-free shadow maps (Sen 2003): accurate scene shadows make accurate pixels.
Sharp, antialiased textures (Sen 2003, Bala 2004 EGSR): accurate scene textures make accurate pixels.
But all outputs are pixels (boundary-free).

Future Work
Bixel image size reduction: a serious problem; it is NOT ROBUST! But when is reduction ever necessary? Put boundaries only at the bottom of an image pyramid; if you need something smaller, send only pixels.
Higher-order boundaries: cubic splines are nice, but tricky, and hardware-hostile.
Hardware implementation: an OpenGL texture can do it; use pseudo-corners. A shader-language version? Might be tricky.

Thanks and Acknowledgements
Thanks to: Kavita Bala, Bruce Walter, and Don Greenberg for source data and image-boundary discussions; Ramesh Raskar and Kar-Han Tan for source data, software, and source-image processing.
Acknowledgements: Cornell Program of Computer Graphics post-doc, largely spent exploring bixels ideas (2000-2001); Umut Tekin (undergrad at Northwestern) for many tests, extensive discussions, and experiments.

(Near Los Angeles Convention Center)

End