Bixels: Picture Samples With Embedded Sharp Boundaries
1 Bixels: Picture Samples With Embedded Sharp Boundaries Jack Tumblin and Prasun Choudhury, Northwestern University, Evanston IL, USA [Figure labels: Bixels (bilinear); Pixels (bilinear)] Today I want to ask a few provocative questions, namely: Why are pixels such a good idea? And can we improve on pixels? How? Even if you don't really like my answers (they have some problems), I want to convince you that the way we represent digital images COULD be improved, and show you a simple attempted improvement that I call "bixels": pixels augmented with boundaries stored with subpixel accuracy. Bixels are useful because: --they describe true discontinuities in an image; --they keep scene quantities separate. See how the green leaf, purple flower, and yellow background values are kept entirely unmixed in the bixel image at left top, but the pixel image mixes them together. --these discontinuities are machine-readable, and may make images easier to compare, search, warp, edit, align, and understand, and may link them to the properties of the original scene (e.g. for NonPhotoRealistic (NPR) rendering). 1
2 A Bixel Image: Piecewise-Smooth Pixel Interpolation: ignore, smooth across boundaries. Bixel Interpolation: stop at boundaries, keep gradients. [Figure: center pixel value = 1.0; surrounding pixel values = 0.0] The most important idea from this talk: an antialiased pixel image is usually presumed to be smooth and continuous everywhere when reconstructed from its samples. Pixels ignore visually important boundaries, and smooth across them. But BIXELS do not: they form a piecewise-smooth image, and include a 2D graph of boundaries that are true discontinuities in both the intensities and the gradients of the image. This planar graph of discontinuities is stored with subpixel accuracy, and takes just 8 more bits per pixel to achieve the effects I will show you. 2
3 But why do this? Aren't pixels the best way to store digital pictures? How can we store their visually meaningful contents in a machine-readable form? What sort of picture primitive would help computers to search, compare, and manipulate images for us? What, exactly, is the ideal content of a digital picture? The more I think about how we store pictures digitally, the more perplexed I become. 3
4 What is the ideal digital picture? If your answer is: "the display radiance, as if gathered by a lens" --then you can ignore this talk; --pixels are a beautiful and complete solution. But if your answer matches mine: "a container for visual experiences, but editable" --you know the editing is awkward and indirect: --pixels are a nearly-irreversible encoding! If you think of digital images as a veridical record of the light entering a lens, then pixels are complete and offer little room for improvement. Optics and light transport are completely linear phenomena, and linear systems describe them beautifully. Linear filters, convolution, sampling and reconstruction, and other tools give us a complete closed algebra for images described by pixels. We truly don't need anything else. Thus, for this definition of digital images, please ignore this talk; pixels are a beautiful, complete solution. But this has always bothered me. If I think of a digital image as a container for visual experience, e.g. all the info needed to recreate, edit, and manipulate visually perceived quantities, then pixels are a nearly-irreversible encoding! We generally CAN'T link the images backwards to the scene properties that caused them, and we can't link them forward to our perceptions of those scene quantities: perceived reflectance, illumination, shape, movement, and more. 4
5 Current Digital Pictures PHYSICAL: 3D Scene (light sources, BRDFs, shapes, positions, movements), eyepoint (position, movement, projection) -> Optics or Rendering -> Image I(x,y,λ,t) -> Exposure Control, tone map -> Pixels -> Display RGB(x,y,t_n) -> Vision -> PERCEIVED: Scene (light sources, BRDFs, shapes, positions, movements), eyepoint (position, movement, projection). Humans see basic, partial information about boundaries, shape, occlusion, lighting, shadows and texture, with few discernible difficulties with high dynamic range, resolution, noise, lighting, or exposure. This basic data is usually difficult or impossible to reliably extract from pixels. But why require extraction? Instead, we should encode this information as part of the image itself. Towards this goal, bixels offer a straightforward way to represent intensity and gradient discontinuities within images with subpixel precision, at a fixed cost: an additional 8 bits per pixel. 5
6 Ideal Digital Pictures PHYSICAL: 3D Scene (light sources, BRDFs, shapes, positions, movements), eyepoint (position, movement, projection) -> Optics or Rendering -> Image I(x,y,λ,t) -> Exposure Control or tone map -> Display RGB(x,y,t_n) -> Vision -> PERCEIVED, via SOMETHING NEW (bixels: a very incomplete answer): Scene (light sources, BRDFs, shapes, positions, movements), eyepoint (position, movement, projection). What we would like is something that more directly describes the visual experience --something that, with some computing, would allow a computer-equipped display to construct a display image, one that, based on the viewing conditions, has the best chance of evoking the desired perceptions of the original scene. 6
7 A 2D Height Field of Sample Points Pixels: assume smooth between samples; discontinuities PERCEIVED, but not stored. Bixels: make scissor cuts in a rubber sheet. An easy way to visualize the difference between bixels and pixels is to think of a pixel image as an intensity height field: each pixel is a vertical post whose height is given by the pixel's value, and between these posts we usually presume the image is a smooth (limited-bandwidth) surface, as if it were a stretchy rubber sheet glued to each post and stretched like a drum-head between the posts. But bixels are different: they describe the visually important boundaries in the image as GENUINE DISCONTINUITIES in both intensity and gradients (such as object silhouettes, self-occlusions, shadows, the seams between different parts, etc.). --Suppose we could take a pair of scissors and cut the rubber sheet that is the height field --(and it is a weird, magical kind of rubber material that doesn't shrink in x,y, but tries to stay as flat as possible) and --the rubber sheet relaxes along the boundaries we cut. THEN, instead of stretching between unrelated sample values (e.g. red and white squares in a checkerboard) and blending them together to approximate the boundary, BIXELS keep them separate; thus --the red areas on one side of the boundary are purely red, and the white areas are purely white; --ALSO, the rubber sheet relaxes to keep local gradients consistent on its side of the boundary. Thus BIXELS CAN PRESERVE BOTH step-like discontinuities (as in this checkerboard) AND ridge-like discontinuities, without underestimating the sharpness of the boundaries. 7
8 Bixels: Discontinuous Scissor Cuts For well-defined intensities everywhere in the image: scissor cuts stay BETWEEN sample points; at most 1 cut between adjacent sample-point pairs (all rubber stays attached to sample pts). Nyquist-like complexity limit: # of rubber pieces <= # of samples. 8
9 Bilinear Interpolation: Pixels Assumptions: Ignore Boundaries. Intensity I = pixel values at sample points. Gradient ∇I = adjacent pixel differences. Pixel Kernel Function: k_p(x,y) = (1 - |x|)(1 - |y|). Extent: 2x2 tiles. Tile == the square area between 4 sample points. The bilinear pixel kernel function shown here is the result of interpolating an image that is zero everywhere EXCEPT for a single unity-valued pixel at the origin. If the red boundary line were separating samples of two unrelated quantities in the scene (such as the silhouette of the leaf against the background), then bilinear interpolation would ignore it, mixing together the leaf and background colors between the samples. 9
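The tent-shaped kernel described above is easy to sketch numerically; this is my own minimal illustration (the function name is not from the talk):

```python
def bilinear_kernel(x, y):
    """Bilinear (tent) pixel kernel k_p(x, y) = (1-|x|)(1-|y|),
    nonzero only over a 2x2-tile extent centered on the sample point."""
    if abs(x) >= 1.0 or abs(y) >= 1.0:
        return 0.0
    return (1.0 - abs(x)) * (1.0 - abs(y))

# Interpolating a lone unit-valued pixel reproduces the kernel itself:
print(bilinear_kernel(0.0, 0.0))   # 1.0 at the sample point
print(bilinear_kernel(0.5, 0.5))   # 0.25 halfway between four samples
```

This is exactly the shape produced by interpolating the single unity-valued pixel in the slide: peak 1.0 at the origin, falling linearly to zero at the neighboring samples.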
10 Bilinear Interpolation: Bixels Assumptions: Never Cross the Boundaries. Intensity I = pixel values at sample points. Gradient ∇I = adjacent pixel differences. Bixel Kernel Function: k_b(x,y) = boundary dependent! Easier: use tiles to evaluate. But the kernel function for bixels is notably different: it is strongly affected by the presence of a nearby boundary (marked in red). It does not mix together the sample values, but instead uses bilinear interpolation from nearby pixel values to preserve the gradients and intensities separately on either side of the boundary. 10
11 Tile and Boundary Definitions Tile: unit square with sample points at corners; sample points labeled ABCD (counter-clockwise). Boundary point (bpt): a vertex in a planar graph of boundaries; only 1 allowed per tile; position (x_p, y_p) stored with the lower-left sample pt. (A) value. Boundary Segment: a single link between 2 boundary pts; connects 2 adjacent bpts; at most 1 boundary segment allowed to cross each tile side. Now for a more formal definition of the boundaries stored and the interpolation method used by bixels: 11
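The storage scheme above can be sketched as a small data structure. All names here are my own illustration of the slide's definitions, not the paper's code:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BoundaryPoint:
    """At most one boundary point (bpt) per tile, stored with the tile's
    lower-left sample point A; (xp, yp) lie inside the unit tile.
    In the paper's scheme the bpt data costs 8 extra bits per pixel."""
    xp: float
    yp: float

@dataclass
class Tile:
    """Unit square between 4 sample points, labeled A, B, C, D
    counter-clockwise from the lower-left corner."""
    A: float
    B: float
    C: float
    D: float
    bpt: Optional[BoundaryPoint] = None
    # Boundary segments link this tile's bpt to bpts in adjacent tiles;
    # at most one segment may cross each of the four tile sides.
    side_crossed: Tuple[bool, bool, bool, bool] = (False, False, False, False)

t = Tile(A=0.0, B=1.0, C=1.0, D=0.0, bpt=BoundaryPoint(0.5, 0.5))
print(t.bpt.xp)   # 0.5
```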
12 Simple Bixel Examples Sample points, without boundaries (40X bounded bilinear interpolation); boundaries to be added are marked in red; grid == tiles. Now suppose we take sample values at the corners of the grid shown here: --On the left, we bilinearly interpolated them; --On the right, we show them (expanded by the box filter, e.g. pixel replication) with red lines that mark the scissor cuts in the image; they show where to place boundaries. 12
13 Simple Bixel Examples Sample points, without boundaries (40X bounded bilinear interpolation); same sample points, with boundaries (40X bounded bilinear interpolation). And here is the effect of those scissor cuts: the resulting bixel image, even when enlarged 40X, preserves local gradients, and the boundary location affects intensity. Note the bright, ridge-like vertical boundary at the top center of the gray rectangle, and the sharp boundaries and smooth shading along the edge. 13
14 Pixels vs. Bixels 14x14 Pixels, bilinear interp.; 14x14 Bixels, bilinear interp., manually-placed bounds. Another example of manual editing, where boundaries greatly add to the comprehensibility of the image. 14
15 Boundary-Limited Interpolation All boundary segments cut tiles into several separate regions. Name the regions by the corners they include: e.g. P_A, P_AB, P_BC, P_BCD. [Figure: tiles with boundary points (bpts) and labeled regions] A bilinear patch function P(x,y) defines the value of the image at (x,y) in each named region. So how do we do this? How can we interpolate between sample points, but not across boundaries, and still maintain local intensities and gradients? 15
16 Boundary-Limited Interpolation If no boundaries are present, the patch function is just the bilinear basis (ignores the bpt): P_ABCD(x,y) = A(1-x_p)(1-y_p) + B(x_p)(1-y_p) + C(x_p)(y_p) + D(1-x_p)(y_p) 16
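The boundary-free patch function above is easy to verify in code; a minimal sketch, with corners A, B, C, D counter-clockwise from the lower-left as defined earlier:

```python
def patch_abcd(A, B, C, D, xp, yp):
    """Bilinear patch P_ABCD at tile coordinates (xp, yp) in [0,1]^2.
    Corners: A=(0,0), B=(1,0), C=(1,1), D=(0,1), counter-clockwise."""
    return (A * (1 - xp) * (1 - yp) +
            B * xp * (1 - yp) +
            C * xp * yp +
            D * (1 - xp) * yp)

# The patch reproduces each corner value exactly:
assert patch_abcd(1, 2, 3, 4, 0, 0) == 1   # corner A
assert patch_abcd(1, 2, 3, 4, 1, 0) == 2   # corner B
assert patch_abcd(1, 2, 3, 4, 1, 1) == 3   # corner C
assert patch_abcd(1, 2, 3, 4, 0, 1) == 4   # corner D
```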
17 Boundary-Limited Interpolation If boundaries for a patch exclude a tile corner (e.g. region P_ABD excludes corner C): estimate the missing corner value using forward differences from neighboring sample values (e.g. from the neighboring samples D_W, B_S). Each pseudo-corner value such as C* is also a bilinear evaluation; the patch function stays bilinear. 17
18 Boundary-Limited Interpolation If boundaries for a patch exclude a tile corner, and boundaries exclude the neighbors too, just "use what you have": estimate the pseudo-corner from tile values (e.g. find C* from the plane through A, B, D, etc.). All cases are STILL bilinear functions (see paper). 18
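The "use what you have" fallback can be sketched directly. Assuming corners at A=(0,0), B=(1,0), C=(1,1), D=(0,1) as before, the plane through A, B, D evaluated at C's position gives a simple closed form (my own derivation of the slide's example, not code from the paper):

```python
def pseudo_corner_c(A, B, D):
    """Estimate missing corner C* from the plane through A, B, D.
    The plane z = A + (B - A)*x + (D - A)*y, evaluated at C's
    position (1, 1), gives C* = B + D - A, which preserves the
    tile's local gradients along both axes."""
    return B + D - A

# A flat tile stays flat:
print(pseudo_corner_c(5, 5, 5))   # 5
# A linear ramp is extended: x-slope 1 (B - A), y-slope 2 (D - A):
print(pseudo_corner_c(0, 1, 2))   # 3
```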
19 Where do Boundaries come from? Computer Graphics Renderers: transform significant scene boundaries to the image (as in point-edge rendering, Bala et al., 2003); somehow decide what is visually significant. Novel Cameras + High Res + Edge-Finders: use lighting change to find depth discontinuities (as in the NPR Camera, Raskar et al., 2004). Sub-pixel Edge-Finding Schemes: scale-space search, model-based fitting, etc. (Elder 1999, etc.). We have something of a circular problem with bixels: how can we find the boundaries we want to put into the images themselves? We used 3 approaches in the results shown in the paper: 19
20 Results: boundary=depth discontinuity (Source data courtesy Ramesh Raskar, MERL) Source (1100x800) Boundaries (50x65) 20
21 Results: boundary=depth discontinuity (Source data courtesy Ramesh Raskar, MERL) Pixels (bilinear) 50x65 Bixels (bilinear) 50x65 21
22 Related Work Computer Vision / Image Processing edge description: edgels, etc. (early recognition of the representation problem). (Edge+Blur)-Only Image Encode? Elder 1995, Elder 1999 (machine-readable connected boundaries). Resolution-Dependent NPR: Salisbury et al. 1996 (edge primitive with sharpness control). Fast ray-tracing edge/point renderer: Bala et al. 2003 (accurate scene bounds make accurate pixels). Sharp, alias-free shadow maps: Sen 2003 (accurate scene shadows make accurate pixels). Sharp, antialiased textures: Sen 2003, Bala 2004 (EGSR) (accurate scene textures make accurate pixels). But all outputs are pixels (boundary-free). 22
23 Future Work Bixel Image Size Reduction: a serious problem, it is NOT ROBUST! But when is reduction ever necessary? Put boundaries only at the bottom of an image pyramid; need something smaller? Send only pixels. Higher-Order Boundaries: cubic splines are nice, but tricky; hardware-hostile. Hardware Implementation: OpenGL texturing can do it; use pseudo-corners; shader language? Might be tricky. 23
24 Thanks and Acknowledgements Thanks to: Kavita Bala, Bruce Walter, and Don Greenberg for source data & image boundary discussions; Ramesh Raskar and Kar-Han Tan, for source data, software, and source image processing. Acknowledgements: Cornell Program of Computer Graphics Post-Doc, largely spent exploring bixels ideas ( 2001); Umut Tekin (ugrad at Northwestern) for many tests, extensive discussions, and experiments. 24
25 (Near Los Angeles Convention Center) 25
26 End 26
Why Mosaic? Are you getting the whole picture? Compact Camera FOV = 5 x 35 Targil : Panoramas - Stitching and Blending Some slides from Alexei Efros 2 Slide from Brown & Lowe Why Mosaic? Are you getting
More informationComputer Graphics 1. Chapter 2 (May 19th, 2011, 2-4pm): 3D Modeling. LMU München Medieninformatik Andreas Butz Computergraphik 1 SS2011
Computer Graphics 1 Chapter 2 (May 19th, 2011, 2-4pm): 3D Modeling 1 The 3D rendering pipeline (our version for this class) 3D models in model coordinates 3D models in world coordinates 2D Polygons in
More informationYou can select polygons that use per-poly UVs by choosing the Select by Polymap command ( View > Selection > Maps > Select by Polygon Map).
UV Texture What is UV Mapping? Sometimes, when mapping textures onto objects, you will find that the normal projection mapping just doesn t work. This usually happens when the object is organic, or irregular
More informationTSBK03 Screen-Space Ambient Occlusion
TSBK03 Screen-Space Ambient Occlusion Joakim Gebart, Jimmy Liikala December 15, 2013 Contents 1 Abstract 1 2 History 2 2.1 Crysis method..................................... 2 3 Chosen method 2 3.1 Algorithm
More informationThe Traditional Graphics Pipeline
Last Time? The Traditional Graphics Pipeline Participating Media Measuring BRDFs 3D Digitizing & Scattering BSSRDFs Monte Carlo Simulation Dipole Approximation Today Ray Casting / Tracing Advantages? Ray
More informationThis work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you
This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you will see our underlying solution is based on two-dimensional
More informationFor Intuition about Scene Lighting. Today. Limitations of Planar Shadows. Cast Shadows on Planar Surfaces. Shadow/View Duality.
Last Time Modeling Transformations Illumination (Shading) Real-Time Shadows Viewing Transformation (Perspective / Orthographic) Clipping Projection (to Screen Space) Graphics Pipeline Clipping Rasterization
More informationWhat have we leaned so far?
What have we leaned so far? Camera structure Eye structure Project 1: High Dynamic Range Imaging What have we learned so far? Image Filtering Image Warping Camera Projection Model Project 2: Panoramic
More informationUnderstanding Gridfit
Understanding Gridfit John R. D Errico Email: woodchips@rochester.rr.com December 28, 2006 1 Introduction GRIDFIT is a surface modeling tool, fitting a surface of the form z(x, y) to scattered (or regular)
More informationVolume Rendering. Lecture 21
Volume Rendering Lecture 21 Acknowledgements These slides are collected from many sources. A particularly valuable source is the IEEE Visualization conference tutorials. Sources from: Roger Crawfis, Klaus
More information2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into
2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel
More informationLets assume each object has a defined colour. Hence our illumination model is looks unrealistic.
Shading Models There are two main types of rendering that we cover, polygon rendering ray tracing Polygon rendering is used to apply illumination models to polygons, whereas ray tracing applies to arbitrary
More informationSampling, Aliasing, & Mipmaps
Sampling, Aliasing, & Mipmaps Last Time? Monte-Carlo Integration Importance Sampling Ray Tracing vs. Path Tracing source hemisphere What is a Pixel? Sampling & Reconstruction Filters in Computer Graphics
More informationReal-Time Shadows. Last Time? Today. Why are Shadows Important? Shadows as a Depth Cue. For Intuition about Scene Lighting
Last Time? Real-Time Shadows Today Why are Shadows Important? Shadows & Soft Shadows in Ray Tracing Planar Shadows Projective Texture Shadows Shadow Maps Shadow Volumes Why are Shadows Important? Depth
More informationStereo imaging ideal geometry
Stereo imaging ideal geometry (X,Y,Z) Z f (x L,y L ) f (x R,y R ) Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and
More informationAutodesk Fusion 360: Render. Overview
Overview Rendering is the process of generating an image by combining geometry, camera, texture, lighting and shading (also called materials) information using a computer program. Before an image can be
More informationS U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T
S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render
More informationGlass Gambit: Chess set and shader presets for DAZ Studio
Glass Gambit: Chess set and shader presets for DAZ Studio This product includes a beautiful glass chess set, 70 faceted glass shader presets and a 360 degree prop with 5 material files. Some people find
More informationComputational Photography
Computational Photography Matthias Zwicker University of Bern Fall 2010 Today Light fields Introduction Light fields Signal processing analysis Light field cameras Application Introduction Pinhole camera
More informationSampling, Aliasing, & Mipmaps
Sampling, Aliasing, & Mipmaps Last Time? Monte-Carlo Integration Importance Sampling Ray Tracing vs. Path Tracing source hemisphere Sampling sensitive to choice of samples less sensitive to choice of samples
More informationReal-Time Shadows. Computer Graphics. MIT EECS Durand 1
Real-Time Shadows Computer Graphics MIT EECS 6.837 Durand 1 Why are Shadows Important? Depth cue Scene Lighting Realism Contact points 2 Shadows as a Depth Cue source unknown. All rights reserved. This
More informationTriangle meshes I. CS 4620 Lecture Kavita Bala (with previous instructor Marschner) Cornell CS4620 Fall 2015 Lecture 2
Triangle meshes I CS 4620 Lecture 2 1 Shape http://fc00.deviantart.net/fs70/f/2014/220/5/3/audi_r8_render_by_smiska333-d7u9pjt.jpg spheres Andrzej Barabasz approximate sphere Rineau & Yvinec CGAL manual
More informationLast Time. Why are Shadows Important? Today. Graphics Pipeline. Clipping. Rasterization. Why are Shadows Important?
Last Time Modeling Transformations Illumination (Shading) Real-Time Shadows Viewing Transformation (Perspective / Orthographic) Clipping Projection (to Screen Space) Graphics Pipeline Clipping Rasterization
More informationIrradiance Gradients. Media & Occlusions
Irradiance Gradients in the Presence of Media & Occlusions Wojciech Jarosz in collaboration with Matthias Zwicker and Henrik Wann Jensen University of California, San Diego June 23, 2008 Wojciech Jarosz
More informationLevel of Details in Computer Rendering
Level of Details in Computer Rendering Ariel Shamir Overview 1. Photo realism vs. Non photo realism (NPR) 2. Objects representations 3. Level of details Photo Realism Vs. Non Pixar Demonstrations Sketching,
More informationRendering: Reality. Eye acts as pinhole camera. Photons from light hit objects
Basic Ray Tracing Rendering: Reality Eye acts as pinhole camera Photons from light hit objects Rendering: Reality Eye acts as pinhole camera Photons from light hit objects Rendering: Reality Eye acts as
More informationThe Traditional Graphics Pipeline
Final Projects Proposals due Thursday 4/8 Proposed project summary At least 3 related papers (read & summarized) Description of series of test cases Timeline & initial task assignment The Traditional Graphics
More informationPeripheral drift illusion
Peripheral drift illusion Does it work on other animals? Computer Vision Motion and Optical Flow Many slides adapted from J. Hays, S. Seitz, R. Szeliski, M. Pollefeys, K. Grauman and others Video A video
More informationPlanar Graphs and Surfaces. Graphs 2 1/58
Planar Graphs and Surfaces Graphs 2 1/58 Last time we discussed the Four Color Theorem, which says that any map can be colored with at most 4 colors and not have two regions that share a border having
More information