The Light Field and Image-Based Rendering


Lecture 11: The Light Field and Image-Based Rendering Visual Computing Systems

Demo (movie) Royal Palace: Madrid, Spain

Image-based rendering (IBR)
So far in course: rendering = synthesizing an image from a 3D model of the scene
- Model of scene includes: cameras, geometry, lights, materials
Today: synthesizing novel views of a scene from existing images
- Where do the input images come from?
  - Previously synthesized by a renderer
  - Acquired from the real world (photographs)

Why does this view look wrong?

What's going on? Consider image resampling.
Scene: as represented by an image rendered from view 1. (Keep in mind that although I've drawn little squares, the image is just a collection of samples.)
[Figure: view 1 and view 2 of the scene in the X-Z plane; ✕ = required image sample for view 2.]

Synthesizing novel scene views from an existing image
If the scene lies approximately on a plane, an affine transform of the image from view 1 yields an accurate image of the scene from view 2.
- This transform is just texture mapping
- Have I made assumptions about the scene in addition to the assumption of planar scene geometry? (Hint: think about surface reflectance)
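To make the resampling concrete: in general, the mapping between two perspective images of a plane is a 3x3 projective transform (a homography), of which an affine transform is a special case, and perspective-correct texture mapping evaluates exactly this kind of per-pixel mapping. The sketch below is only an illustration, not the lecture's code; the matrix H_2to1 and the image names are assumptions. (Whether the resampled colors remain valid from view 2 is exactly the reflectance question posed above.)

    import numpy as np

    def warp_planar_view(src, H_2to1):
        """Resample an image of a planar scene, captured from view 1, to view 2.

        src:    H x W (or H x W x 3) source image from view 1.
        H_2to1: 3x3 projective transform mapping homogeneous pixel coordinates
                in view 2 back to pixel coordinates in view 1 (assumed known,
                e.g. derived from the two camera poses and the plane equation).
        """
        h, w = src.shape[:2]
        out = np.zeros_like(src)
        for y2 in range(h):
            for x2 in range(w):
                # Inverse warp: where does this view-2 pixel land in view 1?
                p = H_2to1 @ np.array([x2, y2, 1.0])
                x1, y1 = p[0] / p[2], p[1] / p[2]   # perspective divide
                xi, yi = int(round(x1)), int(round(y1))
                if 0 <= xi < w and 0 <= yi < h:     # nearest-neighbor resample
                    out[y2, x2] = src[yi, xi]
                # else: this pixel sees part of the plane not covered by view 1
        return out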

Non-planar scene
[Figure: scene in the X-Z plane, represented by an image rendered from view 1; views 1 and 2 shown; ✕ = image sample for view 2.]

Non-planar scene
[Figure: scene in the X-Z plane, as represented by image + depth rendered from view 1; views 1 and 2 shown; ✕ = image sample for view 2.]
Synthesis of novel scene views with correct perspective requires a non-affine transformation of the original image.
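A minimal sketch of such a depth-dependent warp (a forward warp of image + depth into view 2), assuming pinhole intrinsics K shared by both views and a rigid transform (R, t) from view 1's camera frame to view 2's; the function and parameter names are mine, not the lecture's:

    import numpy as np

    def reproject_with_depth(color, depth, K, R, t):
        """Forward-warp an image + per-pixel depth rendered from view 1 into view 2.

        color: H x W x 3 image rendered from view 1
        depth: H x W per-pixel depth (distance along view 1's z axis)
        K:     3x3 pinhole intrinsics, assumed shared by both views
        R, t:  rotation (3x3) and translation (3,) taking view-1 camera
               coordinates to view-2 camera coordinates
        """
        h, w, _ = color.shape
        out = np.zeros_like(color)
        zbuf = np.full((h, w), np.inf)
        Kinv = np.linalg.inv(K)
        for y in range(h):
            for x in range(w):
                # Unproject the view-1 pixel to a 3D point using its depth...
                p1 = depth[y, x] * (Kinv @ np.array([x, y, 1.0]))
                # ...transform it into view 2's camera frame and re-project.
                p2 = K @ (R @ p1 + t)
                if p2[2] <= 0:
                    continue
                x2, y2 = int(round(p2[0] / p2[2])), int(round(p2[1] / p2[2]))
                if 0 <= x2 < w and 0 <= y2 < h and p2[2] < zbuf[y2, x2]:
                    zbuf[y2, x2] = p2[2]        # keep the nearest surface
                    out[y2, x2] = color[y, x]
        # Pixels never written in 'out' are exactly the undersampling and
        # disocclusion artifacts discussed on the next slides.
        return out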

Artifact: undersampled source image
[Figure: scene in the X-Z plane, represented by image + depth rendered from view 1; views 1 and 2 shown.]
Undersampling: the image generated from view 1 contains a sparse sampling of surface regions oriented at a grazing angle to view 1 (the yellow surface region projects to one pixel in view 1, but to several pixels in view 2).

Artifact: disocclusion
[Figure: views 1 and 2 of the scene in the X-Z plane.]
Disocclusion: a surface region visible from view 2 was not visible at all from view 1.

Disocclusion examples [Credit: Chaurasia et al. 2011] [Credit: Chen and Williams 93]

View interpolation
[Figure: views 1, 2, and 3 of the scene in the X-Z plane.]
Combine results from the closest pre-existing views.
Question: how to combine?
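One common answer, sketched below under assumptions of my own (per-source validity masks and a simple inverse-angle weighting): warp each nearby pre-existing view into the target view, then blend the warped images with weights that favor the sources whose viewing directions are closest to the target's, renormalizing wherever some sources have holes.

    import numpy as np

    def blend_warped_views(warped_images, masks, angles_to_target):
        """Blend source views that have already been warped into the target view.

        warped_images:    list of H x W x 3 images, one per source view
        masks:            list of H x W booleans, True where the warp produced a value
        angles_to_target: list of angles (radians) between each source view's
                          direction and the target view's direction
        """
        acc = np.zeros_like(warped_images[0], dtype=np.float64)
        wsum = np.zeros(warped_images[0].shape[:2], dtype=np.float64)
        for img, mask, ang in zip(warped_images, masks, angles_to_target):
            w = mask * (1.0 / (1e-3 + ang))       # nearer views get larger weight
            acc += w[..., None] * img
            wsum += w
        wsum = np.maximum(wsum, 1e-8)             # avoid divide-by-zero in holes
        return acc / wsum[..., None]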

Sprites
[Figure: an original (complex) 3D model, expensive to render; a selection of views is prerendered into textures (sprites); a novel view of the object is synthesized by rendering the sprites.]

Microsoft Talisman [Torborg and Kajiya 96]
Proposed GPU designed to accelerate image-based rendering for interactive graphics (motivating idea: exploit frame-to-frame temporal locality by not rendering the entire scene each frame)
- Implements image transform and image compositing operations
- Implements traditional rendering operations (renders geometric models to images)

Microsoft Talisman [Torborg and Kajiya 96]
- Each object is rendered separately into its own image layer (intent: high-quality renderings from the 3D model, but not produced at real-time rates)
- The image layer compositor runs at real-time rates
- As the scene changes (camera/object motion, etc.), the image layer compositor transforms each layer accordingly, then composites the image layers to produce the complete frame
- The system detects when an image warp is likely to be inaccurate and requests that the layer be re-rendered

Image-based rendering in interactive graphics systems
Promise: render complex scenes efficiently by manipulating preexisting images
Reality: systems have always suffered from artifacts that have prevented them from being a viable replacement for rendering from a detailed 3D scene description
- Not feasible to prerender images for all possible scene configurations and views
- Decades of research on how to minimize artifacts from missing information (intersection of graphics and vision: understanding what's in an image helps fill in missing information... and vision is unsolved)

Temporal reprojection anti-aliasing
Common post-processing technique in many games
- Goal: achieve higher-quality shading results without increasing shading cost
Idea: combine rendered image pixels (or multi-sample buffer samples) from the last frame with the rendered output from the current frame
- Requires the application to store the previous frame's color buffer
To synthesize the final frame at time t:
- Render the scene at time t as usual to produce color_buffer_t
- For each pixel p, compute a velocity vector (based on camera and/or object movement)
- Use the velocity vector to compute the screen location p_{t-1} of the corresponding surface fragment in the previous frame (a "reprojection" of the surface to the previous time)
- Sample color_buffer_{t-1} at p_{t-1}, and combine the result with color_buffer_t[p] to get the final value for p in the current frame
Result: better anti-aliasing of the shading function: two shading samples per surface per pixel at a cost of only one shade per frame. Many heuristics exist to detect disocclusions and minimize reprojection errors.
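Per pixel, the procedure above looks roughly like the sketch below. It is an illustration of the idea only: the velocity-buffer layout, blend weight, and color-difference rejection test are assumptions, and production implementations use many more heuristics (and filtered history sampling rather than nearest-neighbor).

    import numpy as np

    def temporal_reprojection_aa(color_t, color_prev, velocity,
                                 alpha=0.5, reject_thresh=0.2):
        """Combine the current frame with a reprojection of the previous frame.

        color_t:    H x W x 3 color buffer rendered at time t (values in [0, 1])
        color_prev: H x W x 3 color buffer stored from time t-1
        velocity:   H x W x 2 per-pixel screen-space motion in pixels (t-1 to t)
        alpha:      weight given to the current frame's sample (0.5 = equal blend)
        """
        h, w, _ = color_t.shape
        out = color_t.copy()
        for y in range(h):
            for x in range(w):
                # Reproject: where was this surface point in the previous frame?
                px = int(round(x - velocity[y, x, 0]))
                py = int(round(y - velocity[y, x, 1]))
                if not (0 <= px < w and 0 <= py < h):
                    continue                      # reprojection falls off screen
                prev = color_prev[py, px]
                # Crude disocclusion / error heuristic: reject history that
                # differs too much from the freshly shaded result.
                if np.max(np.abs(prev - color_t[y, x])) > reject_thresh:
                    continue
                out[y, x] = alpha * color_t[y, x] + (1.0 - alpha) * prev
        return out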

Good system design: efficiently meeting goals, subject to constraints
New graphics application goals:
- Map the world
- Navigate popular tourist destinations
- Non-goal: virtual reality experience (artifact-free, real-time frame rate, viewer can navigate anywhere in the scene)
Changing constraints:
- Cannot pre-render all scene configurations?
- Ubiquity of cameras
- Cloud-based graphics applications: enormous storage capacity
- Bandwidth now available to access server-side capacity from clients

Google Street View
Goal: orient/familiarize myself with 16th and Valencia, San Francisco, CA
Imagine the complexity of modeling and rendering this scene (and then doing it for all of the Mission, for all of San Francisco, of California, of the world...)

Google Street View
Imagine if your GPU produced images that had geometric artifacts like this!
Imagine if moving through a 3D rendered environment in a game had transitions like Google Maps.

Photo-tourism (now Microsoft Photosynth) [Snavely et al. 2006]
Input: collection of photos of the same scene
Output: sparse 3D representation of the scene, plus the 3D position of the camera for every photo
Goal: get a sense of what it's like to be at Notre Dame Cathedral in Paris

Alternative projections [Image credit: Roman 2006]
Each pixel column in the image above is a column of pixels from a different photograph.
Result: orthographic projection in X, perspective projection in Y.

The Light Field [Levoy and Hanrahan 96] [Gortler et al. 96]

Light-field parameterization
The light field is a 4D function (it represents light in free space: no occlusion).
Efficient two-plane parameterization: a line is described by connecting a point on the (u,v) plane with a point on the (s,t) plane.
- If one of the planes is placed at infinity: point + direction representation
Levoy and Hanrahan refer to this representation as a "light slab": the beam of light entering one quadrilateral and exiting another.
[Image credit: Levoy and Hanrahan 96]
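Mapping a ray into this parameterization is just two ray-plane intersections. A minimal sketch, assuming the (u,v) plane sits at z = 0 and the (s,t) plane at z = 1, and ignoring the slab's boundaries:

    import numpy as np

    def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
        """Map a ray to light-slab coordinates (u, v, s, t).

        The (u,v) plane is z = z_uv and the (s,t) plane is z = z_st;
        the ray is origin + r * direction, with direction[2] != 0.
        """
        o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        r_uv = (z_uv - o[2]) / d[2]     # ray parameter at the (u,v) plane
        r_st = (z_st - o[2]) / d[2]     # ray parameter at the (s,t) plane
        u, v = (o + r_uv * d)[:2]
        s, t = (o + r_st * d)[:2]
        return u, v, s, t

    # Example: a ray through the origin along +z pierces both planes at (0, 0):
    # ray_to_uvst([0, 0, -1], [0, 0, 1])  ->  (0.0, 0.0, 0.0, 0.0)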

Sampling of the light field
[Figure: lines connecting sample points u ∈ [0,1] on the U axis to sample points s ∈ [0,1] on the S axis.]
Simplification: only showing lines in 2D (the full light field is a 4D function).

Line-space representation
Each line in Cartesian space is represented by a point in line space (shown here in 2D; generalizes to lines in 3D).
[Figure: Cartesian space vs. line space. Image credit: Levoy and Hanrahan 96]

Sampling lines
To be able to reproduce all possible views, the light field should uniformly sample all possible lines.
[Figure: lines sampled by one slab; four slabs sample lines in all directions. Image credit: Levoy and Hanrahan 96]

Acquiring a light field
Measure the light field by taking multiple photographs (in this example, each photograph holds (u,v) constant and samples all (s,t)).
[Image credit: Levoy and Hanrahan 96]

Light field storage layouts [Image credit: Levoy and Hanrahan 96]
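Putting the pieces together, a novel view can be synthesized from a stored slab by converting each camera ray to (u,v,s,t) and looking up nearby samples. The sketch below uses nearest-neighbor lookup for brevity (Levoy and Hanrahan instead interpolate the neighboring samples), assumes a regular grid of samples over [0,1]^4, and reuses the hypothetical ray_to_uvst helper sketched earlier:

    import numpy as np

    def render_from_light_field(L, rays):
        """Render a novel view by looking up one stored line per output pixel.

        L:    light-field samples with shape (Nu, Nv, Ns, Nt, 3), one radiance
              value per sampled line; (u, v, s, t) assumed normalized to [0, 1]
        rays: list of (origin, direction) pairs, one per output pixel
        """
        Nu, Nv, Ns, Nt, _ = L.shape
        out = np.zeros((len(rays), 3))
        for i, (o, d) in enumerate(rays):
            u, v, s, t = ray_to_uvst(o, d)               # intersect both slab planes
            if not (0 <= u <= 1 and 0 <= v <= 1 and 0 <= s <= 1 and 0 <= t <= 1):
                continue                                  # ray misses this slab
            iu = min(int(round(u * (Nu - 1))), Nu - 1)    # nearest sampled line;
            iv = min(int(round(v * (Nv - 1))), Nv - 1)    # a faithful implementation
            js = min(int(round(s * (Ns - 1))), Ns - 1)    # would instead blend the
            jt = min(int(round(t * (Nt - 1))), Nt - 1)    # 16 neighboring samples
            out[i] = L[iu, iv, js, jt]
        return out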

Light field inside a camera
Question: what does a pixel measure?
[Figure: ray-space plot (X vs. U); lens aperture parameterized by (U,V), sensor plane by (X,Y), scene focal plane; pixels P1 and P2 marked.]

Light field inside a camera
Answer: a pixel integrates the in-camera light field over the entire lens aperture, so each pixel corresponds to a full column of the (X,U) ray-space plot (all aperture positions U for a fixed sensor position X).
[Figure: ray-space plot; lens aperture (U,V), sensor plane (X,Y), scene focal plane; pixels P1 and P2 shown both on the sensor and in ray space.]
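In code, the answer sketched above amounts to averaging the in-camera light field over all aperture samples for a fixed sensor position; the (Nu, Nv, H, W) sub-aperture layout below is an assumed representation, not something from the lecture:

    import numpy as np

    def pixel_value(L_in_camera, x, y):
        """What a conventional sensor pixel measures.

        L_in_camera: in-camera light field stored as (Nu, Nv, H, W): for each
                     aperture sample (u, v), the image it forms on the sensor.
        The pixel at sensor position (x, y) averages radiance over the entire
        lens aperture, i.e. over a full column of the (X, U) ray-space plot.
        """
        return L_in_camera[:, :, y, x].mean()

    # Pixels P1 and P2 differ only in sensor position; each integrates all (u, v):
    # p1 = pixel_value(L, x1, y); p2 = pixel_value(L, x2, y)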

Readings M. Levoy and P. Hanrahan. Light Field Rendering. SIGGRAPH 1996