Lecture 15: Image-Based Rendering and the Light Field. Kayvon Fatahalian, CMU 15-869: Graphics and Imaging Architectures (Fall 2011)

Demo (movie)
Royal Palace: Madrid, Spain

Image-based rendering (IBR)
So far in course: rendering = synthesizing image from a 3D model of the scene
- Model: cameras, geometry, lights, materials
Today: synthesizing novel views of a scene from images
- Where do the images come from?
- Previously synthesized by a renderer
- Acquired from real world (photographs)

Why does this view look wrong?

[Diagram: the scene, represented by an image rendered from view 1, drawn in the X-Z plane with view 1, view 2, and an image sample for view 2 marked]

Synthesizing novel scene views from an existing image
If the scene lies approximately on a plane, a simple transform of the image from view 1 yields an accurate image of the scene from view 2
Recall: this is texture mapping
Are there assumptions other than planarity of the scene?
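
Concretely, the planar case is a 3x3 homography applied to homogeneous pixel coordinates, i.e., projective texture mapping. A minimal Python sketch (not from the lecture), using inverse mapping; the homography H_2to1 is an assumed input, e.g., derived from the scene plane and the two camera matrices:

    import numpy as np

    def warp_planar(src, H_2to1, out_shape):
        # Synthesize a view-2 image of a planar scene from the view-1 image.
        # src:       view-1 image, shape (h1, w1), single channel for simplicity
        # H_2to1:    3x3 homography mapping view-2 pixels to view-1 pixels
        # out_shape: (h2, w2) of the synthesized view-2 image
        h2, w2 = out_shape
        out = np.zeros(out_shape, dtype=src.dtype)
        ys, xs = np.mgrid[0:h2, 0:w2]
        # Inverse mapping: for each output pixel, find its source sample
        p2 = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
        p1 = H_2to1 @ p2
        u = p1[0] / p1[2]              # the perspective divide is what makes
        v = p1[1] / p1[2]              # this projective rather than affine
        ui = np.round(u).astype(int)   # nearest-neighbor resampling; a real
        vi = np.round(v).astype(int)   # system would filter (e.g., bilinear)
        ok = (ui >= 0) & (ui < src.shape[1]) & (vi >= 0) & (vi < src.shape[0])
        out.ravel()[ok] = src[vi[ok], ui[ok]]
        return out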

Non-planar scene
[Diagram: a non-planar scene, represented by an image rendered from view 1, in the X-Z plane with view 1, view 2, and an image sample for view 2]

Non-planar scene
[Diagram: the scene now represented by image + depth rendered from view 1, with view 1, view 2, and an image sample for view 2]
Synthesis of novel scene views with correct perspective requires a non-affine transformation of the original image
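
A sketch of the per-pixel reprojection this implies (again not from the lecture): unproject each view-1 pixel using its depth, move the resulting 3D point into view 2, and project. The pinhole intrinsics K and the rigid transform (R, t) from view-1 to view-2 camera coordinates are assumed inputs:

    import numpy as np

    def reproject_with_depth(depth1, K, R, t):
        # Map every view-1 pixel into view-2 image coordinates.
        # depth1: (h, w) per-pixel depth along the view-1 optical axis
        # K:      3x3 pinhole intrinsics (assumed shared by both views)
        # R, t:   rigid transform taking view-1 camera coords to view-2
        h, w = depth1.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
        rays = np.linalg.inv(K) @ pix        # unproject pixels to camera rays
        P1 = rays * depth1.ravel()           # 3D points in view-1 coordinates
        P2 = R @ P1 + t.reshape(3, 1)        # move into view-2 coordinates
        p2 = K @ P2                          # project into view 2
        u2 = (p2[0] / p2[2]).reshape(h, w)   # the per-pixel divide by depth
        v2 = (p2[1] / p2[2]).reshape(h, w)   # is the non-affine part
        return u2, v2

Forward-splatting the view-1 colors to these coordinates leaves gaps; the next two slides name the two ways that happens.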

Artifact: undersampled source image
[Diagram: scene represented by image + depth rendered from view 1; a surface region oriented at a grazing angle to view 1 projects to one pixel in view 1 but to several pixels in view 2]
Undersampling: the image generated from view 1 has sparse sampling of surface regions oriented at a grazing angle to view 1

Artifact: disocclusion
[Diagram: views 1 and 2 in the X-Z plane]
Disocclusion: surface region visible from view 2 was not visible from view 1

Disocclusion examples [Credit: Chaurasia et al. 2011] [Credit: Chen and Williams 93]

View interpolation
[Diagram: views 1, 2, and 3 in the X-Z plane]
Combine results from closest pre-existing views
Question: How to combine?
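
One plausible answer, sketched here as an assumption rather than as any particular paper's scheme: warp each nearby view into the target viewpoint, then blend valid pixels with weights that favor angularly closer views (images are float-valued, with holes marked NaN):

    import numpy as np

    def blend_views(warped, view_dirs, target_dir):
        # warped:     source views already warped into the target view,
        #             with holes (disocclusions) marked as NaN
        # view_dirs:  unit viewing direction of each source view
        # target_dir: unit viewing direction of the novel view
        weights = [max(float(d @ target_dir), 0.0) for d in view_dirs]
        acc = np.zeros_like(warped[0])
        wsum = np.zeros_like(warped[0])
        for img, w in zip(warped, weights):
            valid = ~np.isnan(img)           # only blend covered pixels
            acc[valid] += w * img[valid]
            wsum[valid] += w
        # NaN survives wherever no source view had coverage
        return np.divide(acc, wsum, out=np.full_like(acc, np.nan), where=wsum > 0)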

Sprites
[Figure: an original (complex) 3D model, expensive to render; a selection of views pre-rendered into textures; a novel view of the object synthesized by rendering the sprites]

Microsoft Talisman [Torborg and Kajiya 96]
GPU designed to accelerate image-based rendering for interactive graphics
(Motivating idea: leverage frame-to-frame temporal locality)
Implements image transform, compositing operations
Implements traditional rendering operations (renders geometric models to images)

Microsoft Talisman [Torborg and Kajiya 96]
Each object is rendered separately into its own image layer (intent: high-quality renderings, not produced at real-time rates)
The image layer compositor runs at real-time rates
As the scene changes (camera/object motion, etc.), the compositor transforms each layer accordingly, then composites the image layers to produce the complete frame
The system detects when an image warp is likely to be inaccurate and requests that the layer be re-rendered

Image-based rendering in interactive graphics systems
Promise: render complex scenes efficiently by manipulating images
Reality: IBR has never been able to sufficiently overcome artifacts to be a viable replacement for rendering from a detailed 3D scene description
- Not feasible to prerender images for all possible scene configurations and views
- Decades of research on how to minimize artifacts from missing information (intersection of graphics and vision: understanding what's in an image helps fill in missing information... and vision is unsolved)

Good system design: efficiently meeting goals, subject to constraints
New application goals:
- Map the world
- Navigate popular tourist destinations
- Non-goal: virtual reality experience (artifact-free, real-time frame rate, viewer can navigate anywhere in the scene)
Changing constraints:
- Can't pre-render all scene configurations?
- Ubiquity of cameras
- Cloud-based graphics applications: enormous storage capacity
- Bandwidth to access that server-side capacity from clients

Google Street View
Goal: orient/familiarize myself with 16th and Valencia, San Francisco, CA
Imagine the complexity of modeling and rendering this scene (and then doing it for all of the Mission, for all of San Francisco, of California, of the world...)

Google Street View
But imagine if your GPU produced images that had artifacts like this!
Even worse, consider the transitions in Google Maps

Photo Tourism (now Microsoft Photosynth) [Snavely et al. 2006]
Input: collection of photos of the same scene
Output: sparse 3D representation of the scene, plus the 3D position of the camera for every photo
Goal: get a sense of what it's like to be at Notre Dame Cathedral in Paris

Alternative projections [Image credit: Roman 2006]
Each pixel column in the image above is a column of pixels from a different photograph
The result is an orthographic projection in X and a perspective projection in Y
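
A sketch of the construction (array names are illustrative): output column i is a fixed column taken from photograph i, so horizontal position encodes camera position while each column keeps its own photograph's perspective vertically:

    import numpy as np

    def multi_perspective_image(photos, col):
        # photos: photographs taken from cameras along a path, each (h, w, 3)
        # col:    the fixed pixel column extracted from every photograph
        # Column i of the output comes from photograph i: orthographic
        # across X (camera position), perspective within each column (Y).
        return np.stack([p[:, col] for p in photos], axis=1)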

The Light Field [Levoy and Hanrahan 96] [Gortler et al. 96]

Light-field parameterization
The light field is a 4D function (it represents light in free space: no occlusion)
Efficient two-plane parameterization: a line is described by connecting a point on the (u,v) plane with a point on the (s,t) plane
If one of the planes is placed at infinity: point + direction representation
Levoy/Hanrahan refer to the representation as a "light slab": a beam of light entering one quadrilateral and exiting another
[Image credit: Levoy and Hanrahan 96]
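
A minimal sketch of the two-plane parameterization, with the plane placements (z = 0 and z = 1) assumed for illustration:

    def ray_to_slab(o, d, z_uv=0.0, z_st=1.0):
        # Convert a ray (origin o, direction d, 3-vectors) to light-slab
        # coordinates: the (u,v) plane sits at z = z_uv, the (s,t) plane
        # at z = z_st, and a line is named by where it pierces each plane.
        t_uv = (z_uv - o[2]) / d[2]
        t_st = (z_st - o[2]) / d[2]
        u, v = o[0] + t_uv * d[0], o[1] + t_uv * d[1]
        s, t = o[0] + t_st * d[0], o[1] + t_st * d[1]
        return u, v, s, t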

Sampling of the light field
[Diagram: sample lines connecting points on the u axis (u = 0 to u = 1) with points on the s axis (s = 0 to s = 1)]
Simplification: only showing lines in 2D

Line space representation
Each line in Cartesian space is represented by a point in line space (shown here in 2D; generalizes to 3D Cartesian lines)
(e.g., parameterizing a 2D line as x cos θ + y sin θ = r makes it the point (r, θ) in line space)
[Image credit: Levoy and Hanrahan 96]

Sampling lines
To be able to reproduce all possible views, the light field should uniformly sample all possible lines
[Diagram: lines sampled by one slab vs. four slabs, which together sample lines in all directions]
[Image credit: Levoy and Hanrahan 96]

Acquiring a light field
Measure the light field by taking multiple photographs
(In this example, each photograph corresponds to a fixed (u,v) point; its pixels sample the (s,t) plane)
[Image credit: Levoy and Hanrahan 96]
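
Rendering a novel view from the captured photographs is then just resampling: trace an eye ray through each output pixel, convert it to (u,v,s,t) (e.g., with ray_to_slab above), and look up nearby stored samples. A nearest-neighbor sketch, assuming the slab is stored as a dense 4D array with both planes sampled on regular [0,1] grids:

    import numpy as np

    def sample_light_field(L, u, v, s, t):
        # L: 4D array of radiance samples, shape (nu, nv, ns, nt).
        # Levoy and Hanrahan interpolate (quadrilinearly in 4D);
        # nearest neighbor keeps this sketch short.
        def idx(x, n):
            return int(np.clip(round(x * (n - 1)), 0, n - 1))
        nu, nv, ns, nt = L.shape
        return L[idx(u, nu), idx(v, nv), idx(s, ns), idx(t, nt)]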

Light field storage layouts [Image credit: Levoy and Hanrahan 96]

Light field inside a camera
[Diagram: scene focal plane, lens aperture parameterized by (U,V), and sensor plane parameterized by (X,Y), with pixels P1 and P2 marked; alongside, a ray-space plot with axes X and U]
Question: What does a pixel measure?

Light field inside a camera
[Diagram: same setup; in the ray-space plot, pixels P1 and P2 each cover a strip at a fixed sensor position X, spanning all aperture positions U]
Answer: a pixel integrates the in-camera light field over the entire lens aperture at its sensor position
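
In code, that strip integral is a one-liner. A flatland sketch (the array layout L_inside[x, u] is an assumption for illustration): a conventional pixel sums radiance over every aperture position at its sensor location, and so cannot separate the individual (x, u) samples the way a light field camera can.

    def pixel_value(L_inside, x_index):
        # L_inside: flatland in-camera light field (NumPy array of shape
        # (nx, nu)), indexed by sensor position x and aperture position u.
        # A conventional pixel integrates the whole strip at its x;
        # recording the individual (x, u) samples instead is what lets
        # a light field camera refocus after the fact.
        return L_inside[x_index, :].sum()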