More and More on Light Fields


More and More on Light Fields
Topics in Image-Based Modeling and Rendering, CSE291-J00, Lecture 4

Last Lecture
- Re-review with emphasis on radiometry
- Mosaics & QuickTime VR
- The plenoptic function
- The main idea of lumigraph/light field rendering

Announcements
Mailing list: cse291-j@cs.ucsd.edu. If you're not on it and want to be added (e.g., you are auditing and not on the course list), send an email to majordomo@cs.ucsd.edu with the body:
subscribe cse291-j my_email@something.something

Class presentations (requests):
1/23 Diem Vu
1/28 Satya Malick
1/30 Jin-Su Kim
2/4 Sameer Agarwal
2/18 Jongwoo Lim
2/20 Yang Wu
2/25 Peter Schwer
3/4 Cindy Xin Wang

This lecture:
- S. J. Gortler, R. Grzeszczuk, R. Szeliski, M. F. Cohen, "The Lumigraph," SIGGRAPH 1996, pp. 43-54.
- M. Levoy, P. Hanrahan, "Light Field Rendering," SIGGRAPH 1996.
- D. Wood, D. Azuma, W. Aldinger, B. Curless, T. Duchamp, D. Salesin, and W. Stuetzle, "Surface Light Fields for 3D Photography," SIGGRAPH 2000.
- A. Isaksen, L. McMillan, S. J. Gortler, "Dynamically Reparameterized Light Fields," SIGGRAPH 2000, pp. 297-306.

Radiance properties
In free space, radiance is constant as it propagates along a ray. This is derived from conservation of flux and is fundamental in light transport:
d\Phi_1 = L_1 \, d\omega_1 \, dA_1 = L_2 \, d\omega_2 \, dA_2 = d\Phi_2
with solid angles d\omega_1 = dA_2 / r^2 and d\omega_2 = dA_1 / r^2, so
d\omega_1 \, dA_1 = \frac{dA_1 \, dA_2}{r^2} = d\omega_2 \, dA_2
and therefore L_1 = L_2.

Light Field / Lumigraph: Main Idea
In free space, the 5-D plenoptic function can be reduced to a 4-D function (radiance) on the space of light rays. Camera images measure the radiance over a 2-D subset of the 4-D light field. Rendered images are also a 2-D subset of the 4-D lumigraph.

A cube of light
All light from an object can be represented as if it were coming from a surrounding cube: each point on the cube has a 2-D set of rays emanating from it.
Modeling: move the camera center over a 2-D surface. 2-D + 2-D -> 4-D.

Parameterizations of ray space
- Point / angle
- Two points on a sphere
- Points on two planes
- Original images and camera positions
- Plücker coordinates

Two-plane parameterization
Rays are parameterized by where they intersect the two planes. Any point in the 4-D light field/lumigraph is identified by its coordinates (s,t,u,v), giving a continuous light field L(s,t,u,v). A cube around the object gives six slabs. Note that the lumigraph and light field papers interchange the roles of (s,t) and (u,v).
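The two-plane parameterization above can be sketched in a few lines. This is a minimal illustration, not code from either paper; the plane positions z = 0 and z = 1 are assumptions made here for concreteness.

```python
import numpy as np

def ray_to_stuv(origin, direction, z_st=0.0, z_uv=1.0):
    """Map a 3-D ray to (s, t, u, v) by intersecting it with the
    camera plane z = z_st and the focal plane z = z_uv."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    if abs(d[2]) < 1e-12:
        raise ValueError("ray is parallel to the parameterization planes")
    s, t = (o + (z_st - o[2]) / d[2] * d)[:2]   # intersection with (s,t) plane
    u, v = (o + (z_uv - o[2]) / d[2] * d)[:2]   # intersection with (u,v) plane
    return float(s), float(t), float(u), float(v)

# A ray from the origin along +z hits both planes at (0, 0).
print(ray_to_stuv([0, 0, 0], [0, 0, 1]))  # -> (0.0, 0.0, 0.0, 0.0)
```

Each ray in R^3 thus becomes a single point in the 4-D ray space, which is exactly what the light field stores.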

The Ray Space
Bear in mind that a ray in R^3 is represented as a point in the 4-D ray space (s,t,u,v).

Constructing the light field
Move the camera over a 2-D surface and sample:
1. Build a gantry (light field)
2. Use vision techniques to estimate camera position; non-uniform sampling (lumigraph)
3. Rendered images (both)

Light Field Rig
Capture: a computer-controlled camera rig moves the camera to a grid of locations on a plane.

Two views of the light field.

Rendering
For each desired ray:
- Compute its intersection with the (u,v) and (s,t) planes
- Take the radiance of the closest sampled ray
This can be computed using texture mapping hardware.
Interpolation variants:
- Bilinear in (u,v) only
- Bilinear in (s,t) only
- Quadrilinear in (u,v,s,t)
Apply a band-pass filter to remove high-frequency noise that may cause aliasing.

Compression
Two-stage decompression:
1. Decode the entropy coding while loading the file, ending up with a codebook and code indices packed into 16-bit words.
2. De-quantization: the rendering engine requests samples of the light field; the subscript index is calculated and looked up in the codebook to get a vector of sample values, which is subscripted again for the requested sample.

Digitization:
  Slabs: 4
  Images/slab: 32x16
  Pixels/image: 256x256
  Total size: 402 MB
Compression:
  VQ coding: 24:1
  Entropy coding: 5:1
  Total: 120:1
  Compressed size: 3.4 MB
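The quadrilinear interpolation variant can be sketched directly: weight the 16 grid samples surrounding a continuous (s,t,u,v) coordinate by the product of per-axis linear weights. This is an illustrative sketch, not the papers' texture-hardware implementation; the light field is assumed stored as a dense 4-D array.

```python
import numpy as np

def quadrilinear(L, s, t, u, v):
    """Quadrilinearly interpolate a discretized light field L[s, t, u, v]
    at continuous grid coordinates (s, t, u, v)."""
    coords = [s, t, u, v]
    lo = [int(np.floor(c)) for c in coords]
    frac = [c - l for c, l in zip(coords, lo)]
    out = 0.0
    # Sum over the 16 surrounding grid samples, weighting each by the
    # product of per-axis linear weights (clamping at the grid border).
    for corner in range(16):
        idx, w = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(min(lo[axis] + bit, L.shape[axis] - 1))
            w *= frac[axis] if bit else 1.0 - frac[axis]
        out += w * L[tuple(idx)]
    return out

L = np.zeros((4, 4, 4, 4))
L[1, 1, 1, 1] = 1.0
print(quadrilinear(L, 0.5, 1.0, 1.0, 1.0))  # halfway along s -> 0.5
```

The bilinear-in-(u,v) and bilinear-in-(s,t) variants are the same loop restricted to two of the four axes.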

Lumigraph: Non-uniform sampling

Sampling of rays is discrete: the samples in the image (pixels) are not continuous, and the camera locations are not continuous, so we have gaps in our lumigraph.

Lumigraph Rendering
Use rough depth information to improve rendering quality.

Lumigraph Rendering
Use rough depth information to improve rendering quality.

Lumigraph Construction
- Obtain a rough geometric model
- Chroma keying (blue screen) to extract silhouettes
- Octree-based space carving
- Resample images to the two-plane parameterization
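The silhouette-carving step can be illustrated with a minimal sketch. The lumigraph paper uses an octree; this dense-grid version shows only the core test (a voxel survives if it projects inside every silhouette), and the orthographic "cameras" are toy assumptions.

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Keep only voxels whose projection lands inside every silhouette.

    voxels      : (N, 3) array of voxel centers
    cameras     : list of project(point) -> (row, col) functions (assumed)
    silhouettes : list of boolean masks, True inside the object
    """
    keep = np.ones(len(voxels), dtype=bool)
    for project, sil in zip(cameras, silhouettes):
        for i, x in enumerate(voxels):
            r, c = project(x)
            inside = (0 <= r < sil.shape[0] and 0 <= c < sil.shape[1]
                      and bool(sil[r, c]))
            keep[i] &= inside
    return voxels[keep]

# Toy example: two orthographic views of a 2x2x2 voxel block, with
# silhouettes that only admit row 0.
vox = np.array([[r, c, z] for r in range(2) for c in range(2) for z in range(2)])
sil = np.zeros((2, 2), dtype=bool); sil[0, :] = True
cams = [lambda p: (int(p[0]), int(p[1])), lambda p: (int(p[0]), int(p[2]))]
print(carve(vox, cams, [sil, sil]))   # only the voxels with r == 0 survive
```

An octree version applies the same test hierarchically, subdividing only cells whose projection straddles a silhouette boundary.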

Camera Calibration and Pose Estimation
A fixed lens is used to keep the intrinsic parameters constant throughout the process, while the extrinsic parameters change constantly. A previously developed algorithm, in combination with a special capture stage, is used to compute the extrinsic parameters.

3D Shape Approximation
Only a rough estimate of the shape is needed. The octree construction algorithm is used; the resulting volume construction is pictured above.

Lumigraph Rendering
Without using geometry vs. using approximate geometry. Rendered stereo pair using the lumigraph.

Play light field movie.

Dynamically Reparameterized Light Fields

New parameterizations
- Interactive rendering of moderately sampled light fields with significant, unknown depth variation
- Passive autostereoscopic viewing with a fly's-eye lens array; the computation for synthesizing a new view comes directly from the optics of the display device
To achieve the desired flexibility, do not perform the aperture filtering from the original paper.

Light Slabs (Again)
The two-plane parameterization impacts the reconstruction filters. Aperture filtering removes high-frequency data, but if the plane is in the wrong place, it stores only a blurred version of the scene.

Ray Construction
Given a ray r, we can find rays (s,t,u,v) and (s',t',u',v') in data cameras D_{s,t} and D_{s',t'} that intersect the focal surface F at the same point (f,g). Combine their values with a filter; in practice, use more rays and weight the filter accordingly.

Variable Focus
Can create variable-focus effects. Multiple focal surfaces can be used, one or many simultaneously, when rendering ray r.
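The ray-construction step above can be sketched for the simplest case of a planar focal surface: intersect the desired ray with the plane z = focal_depth to get (f,g), then re-aim each data camera at that point. The planar focal surface and cameras on the z = 0 plane are simplifying assumptions for this sketch, not the paper's general focal surfaces.

```python
import numpy as np

def rays_through_focus(ray_o, ray_d, cam_positions, focal_depth):
    """For a desired ray, find the point where it pierces the focal
    plane z = focal_depth, then return, for each data camera, the unit
    direction of the ray from that camera through the same point."""
    o, d = np.asarray(ray_o, float), np.asarray(ray_d, float)
    p = o + (focal_depth - o[2]) / d[2] * d      # point (f, g) on focal plane
    dirs = []
    for c in np.asarray(cam_positions, float):
        v = p - c
        dirs.append(v / np.linalg.norm(v))       # data-camera ray toward (f, g)
    return p, dirs

p, dirs = rays_through_focus([0, 0, 0], [0, 0, 1], [[0.1, 0, 0]], focal_depth=2.0)
print(p)   # the desired ray pierces the focal plane at [0. 0. 2.]
```

Changing `focal_depth` is exactly the "variable focus" effect: scene points on the chosen plane are reconstructed consistently, while points off it are blended from disagreeing samples.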

Focal plane at multiple depths.

Variable Aperture
A real camera aperture produces depth-of-field effects. Emulate them by combining rays from several cameras: combine the samples from each data camera D_{s,t} inside the synthetic aperture to produce the image.
1. Render r with aperture A: in focus
2. Render r with aperture A': out of focus
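The synthetic-aperture idea reduces to averaging the radiance seen by every data camera inside the aperture, all looking at the same focal-plane point. In this sketch `sample(cam, p)` stands in for the actual light field lookup and is an assumption; the circular aperture is likewise illustrative.

```python
import numpy as np

def synthetic_aperture(sample, cam_positions, center, radius, focal_point):
    """Average radiance over all data cameras within a circular synthetic
    aperture, each looking at the same focal-plane point. A small radius
    approximates a pinhole (everything sharp); a large one blurs objects
    off the focal surface."""
    cams = np.asarray(cam_positions, float)
    in_ap = np.linalg.norm(cams - np.asarray(center, float), axis=1) <= radius
    vals = [sample(c, focal_point) for c in cams[in_ap]]
    return sum(vals) / len(vals)

# Toy light field: each camera's "radiance" is just its x coordinate,
# so the symmetric aperture averages to zero.
cams = [[-1.0, 0, 0], [0.0, 0, 0], [1.0, 0, 0]]
lf = lambda cam, p: cam[0]
print(synthetic_aperture(lf, cams, center=[0, 0, 0], radius=1.5,
                         focal_point=[0, 0, 2]))  # -> 0.0
```

Rendering with the full camera array as the aperture gives the extreme depth-of-field images described on the next slide.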

Aperture Size
Small aperture vs. large aperture.
MovingToys animation: the focal plane, camera orientation, and camera position all change.

By using the largest possible aperture, images are created with dramatic (limited) depth of field. These images use an aperture that incorporates all 256 cameras evenly. Note that we can see through the tree when the focal plane is at the hills in the background.

Autostereoscopic Light Fields
Think of the parameterization as an integral photograph, or the fly's-eye view, with each lenslet treated as a single pixel.

View-Dependent Color
Each lenslet can have a view-dependent color: draw a ray through the principal point of the lenslet, and use the paraxial approximation to determine which color will be seen from a given direction.

Unstructured Lumigraph Rendering
"Unstructured Lumigraph Rendering," Chris Buehler, Michael Bosse, Leonard McMillan, Steven Gortler, Michael Cohen, SIGGRAPH 2001, pp. 425-432.
A further enhancement of lumigraphs: no two-plane parameterization, and the original pictures are stored with no resampling; a hand-held camera is moved around an environment.

"Plenoptic Sampling," Jin-Xiang Chai, Shing-Chow Chan, Heung-Yeung Shum, Xin Tong, SIGGRAPH 2000, pp. 307-318.
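The view-dependent color lookup can be sketched under the paraxial approximation: a ray through a lenslet's principal point with direction (dx, dy, dz) lands at offset -f * (dx/dz, dy/dz) on the pixel patch behind the lenslet. The focal length, pixel pitch, and patch size below are illustrative assumptions, not values from the paper.

```python
def lenslet_pixel(view_dir, focal_len, pixel_pitch, patch_size):
    """Pick which pixel behind a lenslet is seen from `view_dir`.

    Paraxial sketch: the image-plane offset is -f * tan(theta), and for
    small angles tan(theta) ~= dx/dz. Offsets are measured from the
    center of the patch_size x patch_size pixel block."""
    dx, dy, dz = view_dir
    off_x = -focal_len * dx / dz          # paraxial image-plane offset
    off_y = -focal_len * dy / dz
    col = int(round(off_x / pixel_pitch)) + patch_size // 2
    row = int(round(off_y / pixel_pitch)) + patch_size // 2
    return row, col

# A head-on view hits the central pixel of a 9x9 patch.
print(lenslet_pixel((0, 0, 1), focal_len=1.0, pixel_pitch=0.1, patch_size=9))
# -> (4, 4)
```

Tilting the view direction shifts the lookup across the patch, which is what produces the view-dependent color of each lenslet.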