CSL 859: Advanced Computer Graphics Dept of Computer Sc. & Engg. IIT Delhi
Point-Based Representation
- Point sampling of a surface: with mesh construction, or mesh-less
- Samples often come from laser scanning, or even from natural-light capture
- How do you render? How do you perform other processing (visibility, collision, etc.)?
- Related concepts: image-based representations, particle systems
Laser Range Scanning
- Digital Michelangelo Project
- Laser scanners sample points on a 3D shape, often on a regular grid
- Very large numbers of samples
- Most scanned surfaces contain high-frequency detail
Point-Based Rendering
Surfels [Pfister et al., SIGGRAPH 2000]
Sampling Known Objects: 3D Rasterization
- Layered depth images: cast rays through the object along each principal axis, advancing each ray by regular increments
- Store pre-filtered texture colors at the sample points: project the surfel into texture space and filter to get a single color
- Disk radius >= maximum inter-sample distance; at octree level k, r_k = r_0 * 2^k
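The axis-aligned ray marching above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the object is a hypothetical implicit unit sphere, only one axis is marched, and a surface crossing is detected by an inside/outside sign change.

```python
def sample_ldi(radius=1.0, h=0.25, step=0.01):
    """March axis-aligned rays (here: along +z) on a regular (x, y) grid
    through an implicit sphere, recording every surface crossing.
    Each pixel may store several <depth, ...> entries: a layered depth image."""
    surfels = {}  # (ix, iy) -> list of hit depths along the ray
    n = int(2 * radius / h) + 1
    for ix in range(n):
        for iy in range(n):
            x = -radius + ix * h
            y = -radius + iy * h
            hits = []
            prev_inside = False
            z = -1.5 * radius            # start safely outside the object
            while z <= 1.5 * radius:
                inside = x * x + y * y + z * z <= radius * radius
                if inside != prev_inside:   # sign change => crossed the surface
                    hits.append(z)
                prev_inside = inside
                z += step
            if hits:
                surfels[(ix, iy)] = hits
    return surfels

ldi = sample_ldi()
# rays through the interior cross the sphere twice (front and back surface)
```

Repeating this for the other two axes would give the three LDIs of a layered depth cube.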
EWA Filtering
- Filter shape (e.g., Gaussian) is usually circular/spherical
- We want that shape on screen => an ellipse in object space
- EWA (Elliptical Weighted Average) filter = low-pass filter convolved with the warped reconstruction filter
- Pipeline: reconstruction filter on the surface -> projection to screen space -> convolution with the screen-space low-pass filter
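The warp-then-convolve structure can be made concrete with 2x2 covariance matrices: warping a Gaussian by the local affine approximation J of the projection maps its covariance V to J V J^T, and convolving two Gaussians adds their covariances, so a unit low-pass filter contributes the identity. A minimal sketch (the matrix values below are illustrative, not from the paper):

```python
# EWA screen-space filter covariance:  V' = J V J^T + I
# V : covariance of the object-space Gaussian reconstruction kernel
# J : local affine approximation (Jacobian) of the projection
# I : unit screen-space low-pass (anti-aliasing) Gaussian
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def ewa_screen_covariance(J, V):
    JVJt = mat_mul(mat_mul(J, V), transpose(J))
    return [[JVJt[0][0] + 1.0, JVJt[0][1]],
            [JVJt[1][0], JVJt[1][1] + 1.0]]

# Example: isotropic splat (radius 2, V = 4I) under minification by 2 (J = 0.5I)
V = [[4.0, 0.0], [0.0, 4.0]]
J = [[0.5, 0.0], [0.0, 0.5]]
Vp = ewa_screen_covariance(J, V)
# Vp = [[2.0, 0.0], [0.0, 2.0]]: warped kernel (1.0) plus low-pass (1.0)
```

Under strong minification the low-pass term dominates, which is exactly why EWA stays anti-aliased where plain splatting would flicker.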
Storing Surfels
- Three layered depth images (LDIs), one per principal axis; an LDI stores multiple <depth, color> entries per pixel
- Together they form a layered depth cube (LDC)
- Pixel spacing h_0 is chosen relative to the expected screen resolution
- Octree with each node = a b x b block of pixels, built bottom-up
- At level i, pixel spacing h_i = h_0 * 2^i; each block is filtered down from its children
Projection
- View-frustum culling
- Traverse blocks to find the right resolution: one surfel per pixel, or n per pixel for supersampling
- Estimate d_max = (length of the projected block diagonal) / b; if d_max exceeds the filter footprint, traverse the children
- Warp blocks to screen space with fast incremental algorithms [Grossman & Dally]: only a few operations per surfel, fewer than a matrix multiplication, thanks to the regularity of the samples
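The resolution-selection traversal can be sketched as below. This is a schematic, not the paper's code: `project_diag` is a hypothetical callback returning the screen-space length of a block's diagonal, and nodes are plain dicts.

```python
def select_blocks(node, project_diag, b, footprint, out):
    """Pick octree blocks whose projected sample spacing matches the
    screen filter footprint; refine (recurse) while spacing is too coarse."""
    d_max = project_diag(node) / b          # max projected sample spacing
    if d_max > footprint and node["children"]:
        for c in node["children"]:
            select_blocks(c, project_diag, b, footprint, out)
    else:
        out.append(node)                    # render this block's surfels

# Example: a root block whose diagonal projects to 32 px over b = 16 pixels
# (spacing 2 px) is too coarse for a 1.5 px footprint, so its children
# (spacing 1 px) are selected instead.
root = {"level": 0, "diag": 32.0,
        "children": [{"level": 1, "diag": 16.0, "children": []}
                     for _ in range(8)]}
chosen = []
select_blocks(root, lambda n: n["diag"], b=16, footprint=1.5, out=chosen)
```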
Filling
- Use a Z-buffer; each surfel covers an area
- Project this area orthographically and scan-convert it, approximating the ellipse with a bounding parallelogram
- Depth varies linearly across the splat, following the surfel normal
- The image may still have holes (under magnification, or for steeply oriented surfels)
- Shading: per-surfel Phong illumination; also incorporates environment/normal maps
- Reconstruct the image by filling holes between projected surfels: apply a filter in screen space; supersample to improve quality
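The z-buffered scan conversion of splats can be sketched as follows. This is simplified relative to the slide: splats are circular disks of constant depth (the linear, normal-driven depth variation is omitted), and the surfel tuple format is invented for the example.

```python
def splat(surfels, w, h):
    """Z-buffered splatting: each surfel covers a small screen-space disk;
    the nearest surfel wins per pixel. surfels = (x, y, z, radius, color)."""
    zbuf = [[float("inf")] * w for _ in range(h)]
    color = [[None] * w for _ in range(h)]
    for (x, y, z, r, c) in surfels:
        for py in range(max(0, int(y - r)), min(h, int(y + r) + 1)):
            for px in range(max(0, int(x - r)), min(w, int(x + r) + 1)):
                if (px - x) ** 2 + (py - y) ** 2 <= r * r and z < zbuf[py][px]:
                    zbuf[py][px] = z      # depth test: keep the nearest surfel
                    color[py][px] = c
    return color

img = splat([(2, 2, 1.0, 1.5, "red"), (2, 2, 0.5, 1.0, "blue")], 5, 5)
# the nearer (smaller z) blue surfel overwrites red where their disks overlap
```

Pixels left `None` are the holes the slide mentions; a screen-space filtering pass would fill them.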
Splatting and Reconstruction
QSplat Primary goal is interactive rendering of very large point-data sets Built for the Digital Michelangelo Project [Rusinkiewicz and Levoy, SIGGRAPH 2000]
Sphere Trees
- A hierarchy of spheres, with leaves containing single vertices
- Each sphere stores center, radius, normal, normal-cone width, and (optionally) color
- Tree built the same way one would build a kd-tree: median-cut method
- Rendered very large models for its time; focus on memory layout and data quantization
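The median-cut build can be sketched as below. This is a schematic only: QSplat additionally stores quantized normals, cone widths, and colors in a carefully packed memory layout, all of which are omitted here, and the bounding sphere is taken from the bounding box rather than being the minimal one.

```python
def build_sphere_tree(points):
    """Median-cut construction: split along the widest axis at the median
    and recurse; each node stores a bounding sphere (center, radius)."""
    xs = [p[0] for p in points]; ys = [p[1] for p in points]; zs = [p[2] for p in points]
    cx, cy, cz = (min(xs)+max(xs))/2, (min(ys)+max(ys))/2, (min(zs)+max(zs))/2
    r = max(((p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2) ** 0.5 for p in points)
    node = {"center": (cx, cy, cz), "radius": r, "children": []}
    if len(points) > 1:                      # leaves hold a single vertex
        spans = (max(xs)-min(xs), max(ys)-min(ys), max(zs)-min(zs))
        axis = spans.index(max(spans))       # widest axis, as in a kd-tree
        pts = sorted(points, key=lambda p: p[axis])
        mid = len(pts) // 2                  # median cut
        node["children"] = [build_sphere_tree(pts[:mid]),
                            build_sphere_tree(pts[mid:])]
    return node

tree = build_sphere_tree([(0, 0, 0), (4, 0, 0), (0, 2, 0), (4, 2, 0)])
```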
Rendering Sphere Trees
- Start at the root
- Visibility test: view-frustum culling, plus back-face culling based on the normal cone
- Recurse or draw: recurse based on the projected area of the sphere, with an adaptive threshold
- To draw, use the normals for lighting and the z-buffer to resolve occlusion
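The recurse-or-draw traversal can be sketched as follows. The callbacks `visible` and `projected_area` are placeholders for the real frustum/normal-cone tests and screen-space size estimate; the node layout is invented for the example.

```python
def render(node, visible, projected_area, threshold, draw):
    """QSplat-style traversal sketch: cull invisible spheres, draw a node
    as a single splat once its projected area drops below the threshold,
    otherwise recurse into its children."""
    if not visible(node):                    # frustum / normal-cone culling
        return
    if not node["children"] or projected_area(node) < threshold:
        draw(node)                           # one splat stands in for the subtree
    else:
        for c in node["children"]:
            render(c, visible, projected_area, threshold, draw)

root = {"area": 100.0,
        "children": [{"area": 25.0, "children": []},
                     {"area": 25.0, "children": []}]}
fine, coarse = [], []
render(root, lambda n: True, lambda n: n["area"], 50, fine.append)    # refines
render(root, lambda n: True, lambda n: n["area"], 200, coarse.append) # root only
```

Raising the threshold trades quality for speed, which is what makes the rendering rate adjustable during interaction.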
Splat Shape
- Several options: square (OpenGL point), circle (triangle fan or texture-mapped square), Gaussian (requires two passes)
- Can squash splats depending on the viewing angle
- Squashing sometimes causes holes at silhouettes; this can be fixed by bounding the squash factor
Splat Shape
Splat Silhouettes
Few Splats
Many Splats
Surface Reconstruction: Cocone
- For each sample p, consider the Voronoi cell of p
- p+ = pole of p = the point in the Voronoi cell farthest from p
- For a dense sample (ε < 0.1), the vector from p to p+ is within π/8 of the true surface normal
- Intuition: the surface is nearly flat within the cell, so the cell stretches out along the normal direction
Sample Reconstructed Surfaces courtesy Tamal Dey, OSU
Moving Least Squares
- For a point set P, MLS defines a projection operator ψ; the MLS surface is its set of fixed points: MLS(P) = { x : ψ(P, x) = x }
- ψ(P, x) works in two steps: (1) find a local reference plane near x; (2) fit a local polynomial approximation p(u, v) to the surface over that plane, and project x onto it
MLS courtesy Luiz Velho
Reference Domain
- The local plane is computed so as to minimize a local weighted sum of squared distances of the points p_i to the plane: sum_i (<n, p_i> - D)^2 θ(||p_i - q||), where {x : <n, x> = D} is the plane, q is the projection of the query point onto it, and θ is a smooth, decreasing weight function (e.g., a Gaussian)
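A planar (2D) analogue makes the reference-domain fit concrete: fit a weighted least-squares line around the query with Gaussian weights, then move the query onto it. This sketch is a simplification of the real MLS step (no iteration on q, line instead of plane, no polynomial refit); the point data and bandwidth `h` are invented.

```python
import math

def mls_project_2d(q, points, h=0.5):
    """Simplified 2D MLS step: fit the weighted least-squares line
    y = a*x + b around query q (Gaussian weights theta decaying with
    distance to q), then project q vertically onto that line."""
    w = [math.exp(-((px - q[0]) ** 2 + (py - q[1]) ** 2) / h ** 2)
         for px, py in points]
    sw  = sum(w)
    sx  = sum(wi * px for wi, (px, _) in zip(w, points))
    sy  = sum(wi * py for wi, (_, py) in zip(w, points))
    sxx = sum(wi * px * px for wi, (px, _) in zip(w, points))
    sxy = sum(wi * px * py for wi, (px, py) in zip(w, points))
    a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)   # weighted LSQ slope
    b = (sy - a * sx) / sw                            # weighted LSQ intercept
    return (q[0], a * q[0] + b)

# Noisy samples of the line y = 0: projecting q = (0, 0.3) pulls it back
pts = [(-1.0, 0.05), (-0.5, -0.05), (0.0, 0.04), (0.5, -0.04), (1.0, 0.03)]
q_proj = mls_project_2d((0.0, 0.3), pts)
```

Iterating this projection is what makes ψ well defined on a neighborhood of the point set.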
Light Field
- Uniformly sample two parallel planes: a (u,v) plane of cameras and an (s,t) focal plane; each ray is indexed by its two plane intersections
- Many plane pairs (light slabs) are needed to cover all viewing directions
- The data is huge, so compression is needed: (vector) quantization
Rendering
- For each desired ray: compute its intersections with the (u,v) and (s,t) planes
- Simplest: take the closest sampled ray
- Better: filter (interpolate) among the closest samples, up to quadrilinear interpolation in (u, v, s, t)
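The ray-to-4D-coordinate step can be sketched directly. Assumptions for the example: the two planes are z = 0 and z = 1 (the real slab geometry is arbitrary), and lookup is nearest-neighbor snapping rather than quadrilinear interpolation.

```python
def ray_two_plane(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a ray by its intersections with two parallel planes,
    z = z_uv for (u, v) and z = z_st for (s, t); returns the 4D
    light-field coordinate (u, v, s, t)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    tu = (z_uv - oz) / dz                 # ray parameter at the (u, v) plane
    ts = (z_st - oz) / dz                 # ray parameter at the (s, t) plane
    return (ox + tu * dx, oy + tu * dy, ox + ts * dx, oy + ts * dy)

def nearest_sample(coord, grid_h):
    """Nearest-neighbor lookup: snap (u, v, s, t) onto the sampling grid.
    (Quadrilinear interpolation over the 16 neighbors is smoother.)"""
    return tuple(round(c / grid_h) * grid_h for c in coord)

coord = ray_two_plane((0.0, 0.0, -1.0), (0.5, 0.0, 1.0))   # (0.5, 0.0, 1.0, 0.0)
snapped = nearest_sample((0.52, 0.0, 1.0, 0.0), 0.25)
```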
Compression
- Vector quantization: build a codebook of 4D tiles; each tile in the data is replaced by an index into the codebook
- Example: 2x2x2x2 tiles of 24-bit RGB samples (384 bits) with a 16-bit index give 24:1 compression (ignoring codebook storage)
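The tiling and the 24:1 arithmetic can be checked with a small sketch. Note the simplification: a real vector quantizer chooses the codebook by clustering (e.g., k-means on training tiles); exact-match deduplication stands in for that step here, and the data is synthetic.

```python
def vq_compress(samples, tile, bits_per_sample=24, index_bits=16):
    """VQ bookkeeping sketch: group samples into fixed-size tiles, keep one
    codebook entry per distinct tile, store one short index per tile.
    Returns (codebook size, per-tile compression ratio)."""
    codebook, indices = {}, []
    for i in range(0, len(samples), tile):
        key = tuple(samples[i:i + tile])          # one 4D tile of samples
        if key not in codebook:
            codebook[key] = len(codebook)         # exact-match stand-in for clustering
        indices.append(codebook[key])
    ratio = (tile * bits_per_sample) / index_bits # e.g. 16*24 / 16 = 24
    return len(codebook), ratio

# 2x2x2x2 tile = 16 RGB samples = 384 bits, replaced by a 16-bit index
n_codes, ratio = vq_compress(list(range(64)) * 4, tile=16)
# ratio == 24.0, matching the 24:1 figure on the slide
```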