Computational Photography Matthias Zwicker University of Bern Fall 2010
Today Introduction Light fields Signal processing analysis Light field cameras Applications
Introduction Pinhole camera captures all light rays passing through one point
Introduction What if we could capture all light rays in a scene? Plenoptic function: information available to an observer at any point in space and time [Adelson, Bergen] http://web.mit.edu/persci/people/adelson/pub_pdfs/elements91.pdf
Plenoptic function Most general case: function of 7 variables Ray direction (2 degrees of freedom, dof) Ray origin (3 dof) Wavelength (1 dof) Time (1 dof) Value stored is radiance
Plenoptic function 2D image is a 2D slice of plenoptic function Perspective images: fix camera position, vary ray direction Other arrangements a e are possible Multi-viewpoint panoramas http://grail.cs.washington.edu/projects/multipano/
Applications Panoramas Multi-viewpoint, spherical, ... Image based rendering (free choice of viewpoint) Interactive walkthroughs Two panoramas from nearby viewpoints http://www.cs.unc.edu/~mcmillan/papers/sig95_mcmillan.pdf
Applications Computing images using signal processing E.g., refocusing http://graphics.stanford.edu/papers/lfcamera/ Depth extraction 2D slices of the plenoptic function Depth map http://www-bcs.mit.edu/people/jyawang/demos/plenoptic/plenoptic.html
Applications Autostereoscopic 3D displays, no glasses Holografika (holographic projection screens) http://www.holografika.com/ Alioscopy (microlens arrays) www.alioscopy.com
Capturing the plenoptic function Use many cameras at the same time http://www.timesplice.com.au/home.html http://graphics.stanford.edu/projects/array/ Plenoptic camera http://www-bcs.mit.edu/people/jyawang/demos/plenoptic/plenoptic.html
Today Introduction Light fields Signal processing analysis Light field cameras Applications
Light fields Want to capture, process, display plenoptic function Need suitable representation to operate on Light fields based on two-plane parameterization
Two-plane parameterization Assumption: record light rays only outside convex hull of a scene Need only 2 dof for origin of light ray, since radiance along ray does not change Ray geometry has 4 dof instead of 5 dof Light fields are 4D functions Not recorded Convex hull Scene geometry Recorded ray
Two-plane parameterization Ray geometry represented by intersection location with two parallel planes Can interpret as 2 dof for origin, 2 dof for direction Each ray stores RGB radiance Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf
Two-plane parameterization Uniform sampling: usually cameras on uniform grid on (u,v) plane Scene behind (s,t) plane Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf
Acquiring light fields Sample directly on uniform grid Each camera corresponds to one sample point on uv-plane Otherwise, resampling necessary
Two-plane parameterization Uniform sampling on (u,v) and (s,t) plane Camera plane Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf
Two-plane parameterization Spatial resolution Resolution of each image Number of samples on (s,t) plane, usually a few million Angular resolution Number of cameras (nr. of directions each scene point is seen from) Number of samples on (u,v) plane, usually a few dozen to hundreds Camera plane
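As a concrete picture of the two-plane representation, a sampled light field can be stored as a multi-dimensional array indexed by (u, v, s, t). The sketch below uses numpy and hypothetical resolutions not taken from the slides:

```python
import numpy as np

# Hypothetical discretized light field L(u, v, s, t) stored as a 5D array:
# angular axes (u, v) = camera grid position, spatial axes (s, t) = pixel,
# last axis = RGB radiance. Resolutions chosen arbitrarily for illustration.
U, V = 8, 8          # angular resolution: 8x8 camera grid
S, T = 64, 48        # spatial resolution of each view
lf = np.zeros((U, V, S, T, 3), dtype=np.float32)

# The image seen by the camera at grid position (u, v) is one 2D slice:
view = lf[2, 3]              # shape (S, T, 3)

# A single ray (u, v, s, t) maps to one RGB radiance sample:
radiance = lf[2, 3, 10, 20]  # shape (3,)
print(view.shape, radiance.shape)
```

With this layout, the spatial resolution is (S, T) per view and the angular resolution is the (U, V) camera grid, matching the terminology above.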
Light field rendering Given uniformly sampled light field Place camera arbitrarily relative to parameterization planes Desired image consists of a set of rays through pixel centers Rendering requires interpolation In 4D, each ray given by 4 parameters Desired rays Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf
Interpolation (2D illustration) Three variants for a virtual camera ray: nearest neighbor, uv bilinear, uvst quadrilinear http://graphics.stanford.edu/papers/light/
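The uvst quadrilinear interpolation above can be sketched as a weighted sum of the 16 samples surrounding the desired ray. This is a minimal illustration assuming a scalar-valued 4D array lf[u, v, s, t]; names and layout are hypothetical:

```python
import numpy as np

def quadrilinear(lf, u, v, s, t):
    """Interpolate a scalar 4D light field lf[u, v, s, t] at a fractional
    ray coordinate: weighted sum of the 16 surrounding samples, where each
    corner's weight is the product of four 1D linear weights."""
    coords = [u, v, s, t]
    base = [int(np.floor(c)) for c in coords]
    frac = [c - b for c, b in zip(coords, base)]
    out = 0.0
    for corner in range(16):                  # 2^4 corners of the 4D cell
        idx, w = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(min(base[axis] + bit, lf.shape[axis] - 1))
            w *= frac[axis] if bit else 1.0 - frac[axis]
        out += w * lf[tuple(idx)]
    return out

# On a light field that is linear in (u, v, s, t), interpolation is exact:
iu, iv, isamp, it = np.indices((4, 4, 4, 5))
lf = (iu + 2 * iv + 3 * isamp + 4 * it).astype(np.float64)
val = quadrilinear(lf, 1.5, 2.25, 0.5, 3.0)
```

The nearest-neighbor and uv-bilinear variants on the slide simply drop the linear weights on some of the four axes.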
Today Introduction Light fields Signal processing analysis Light field cameras Applications
Signal processing analysis Want to understand sampling, reconstruction, aliasing of light fields, similarly to 2D images Benefits Antialiasing algorithms High quality filters Efficient algorithms Chai et al., Plenoptic sampling http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm
Ray space (2D illustration) A ray is a point in ray space Origin of v relative to t Two-plane parameterization Ray space Caution: different arrangements occur in literature!
Ray space All rays through a point form a line in ray space Slope inversely related to depth Two plane parameterization Ray space Point towards infinity: slope becomes horizontal Point on v-axis: slope is diagonal
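The slope claim can be made precise. Assume (as a hypothetical concrete setup) the t-plane at depth 0, the v-plane at depth d, a scene point at (x, z), and v measured relative to t as on the previous slide. The ray entering at t and passing through the point satisfies

```latex
v = \frac{d}{z}\,(x - t),
\qquad
\frac{\mathrm{d}v}{\mathrm{d}t} = -\frac{d}{z}.
```

So the slope is inversely proportional to depth: for z toward infinity the line becomes horizontal, and for a point on the v-plane (z = d) the slope is -1, the diagonal.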
Ray space Scene with constant depth forms a sheared structure in ray space Two plane parameterization Ray space
Ray space Visualize ray space of one scanline Moving viewpoint Scanline Scene with several depth layers Ray space, epipolar plane image (EPI) http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm
Fourier analysis Fourier transform Ray space Light field spectrum
Fourier analysis Scenes with bounded depth range EPI of one scanline Light field spectrum, bow tie Zero power outside the bow tie
Fourier analysis Scene with several depth layers Ray space Fourier transform http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm
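The concentration of a constant-depth scene's energy along a single line in the spectrum can be checked numerically. A small sketch with a synthetic sinusoidal EPI (all parameters illustrative, not taken from the slides):

```python
import numpy as np

# Synthetic EPI of a constant-depth scene: every view (row t) shows the
# same 1D signal shifted by a disparity of `a` pixels per view, so
# I(t, v) = f(v - a*t).
N, a, k = 64, 2, 4
t, v = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
epi = np.cos(2 * np.pi * k * (v - a * t) / N)

# The 2D DFT concentrates on the line f_t = -a * f_v through the origin.
spec = np.abs(np.fft.fft2(epi))
ft, fv = np.unravel_index(int(np.argmax(spec)), spec.shape)
ft = ft - N if ft >= N // 2 else ft     # map to signed frequencies
fv = fv - N if fv >= N // 2 else fv
print(ft, fv)                           # lies on f_t = -a * f_v
```

For a scene with a bounded depth range, the admissible slopes fill a wedge of such lines, which is exactly the bow-tie-shaped support above.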
Sampling and reconstruction Continuous reconstruction allows interpolation Rendering of new views Before: quadrilinear interpolation/reconstruction Can we do better? Desired rays uvst quadrilinear reconstruction filter
Sampling and reconstruction Sampling leads to replication of spectra Reconstruction is multiplication with filter spectrum Overlap with non-central replicas is aliasing (Idealized) spectrum of reconstruction filter Reconstruction with aliasing Aliasing appears as double edges
Sampling and reconstruction Sheared reconstruction filter to match depth of scene allows lower sampling rate Higher sampling rate, isotropic reconstruction filter, aliasing Lower sampling rate, optimal reconstruction filter, no aliasing
Sheared reconstruction No aliasing at lower sampling rate http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm
Sheared reconstruction Shearing reconstruction filter equivalent to shearing signal before filtering Visualization using 3D light field example Only horizontal camera movement 3D light field interpreted as image stack
Sheared reconstruction Input Reconstruction: filtering across images Shear determines which depth is aligned with filter
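Filtering across the image stack after a depth-dependent shift can be sketched as follows. This is a minimal sketch with 1D scanline images, integer-pixel shifts, and wrap-around via np.roll; real implementations use proper resampling and filter kernels:

```python
import numpy as np

def sheared_average(stack, shear):
    """Filter across the images of a 3D light field (views from a
    horizontally moving camera): shift view i by shear * i pixels, then
    average. Content at the depth whose per-view disparity equals `shear`
    stays aligned and sharp; other depths blur out (synthetic aperture)."""
    out = np.zeros_like(stack[0], dtype=np.float64)
    for i, img in enumerate(stack):
        # integer shift with wrap-around; a simplification of resampling
        out += np.roll(img, -int(round(shear * i)), axis=-1)
    return out / len(stack)

# Toy example: a scene point with disparity 1 pixel/view.
views = []
for i in range(4):
    img = np.zeros(32)
    img[5 + i] = 1.0
    views.append(img)
focused = sheared_average(views, shear=1.0)   # point realigns at column 5
```

Varying `shear` selects which depth is aligned with the filter, which is exactly the shear/depth correspondence discussed above.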
Filter visualization No shearing, high bandwidth in t and v Aliasing, as before
Filter visualization Sheared, high bandwidth in t and v No (or reduced) aliasing
Filter visualization Sheared, low bandwidth in t (angle) Large synthetic aperture
Synthetic aperture Small synthetic aperture Large synthetic aperture
Synthetic aperture The scene depth aligned with sheared filter is sharp, in focus Can choose focus arbitrarily by varying shear! Size of filter determines depth of field Adjusting shear of large synthetic aperture filter to change focus http://groups.csail.mit.edu/graphics/pubs/siggraph2000_drlf.pdf
Filter visualization Sheared, low bandwidth in t and v Blurry images
Note Shearing the filter and shearing the signal are equivalent Sheared filter Sheared signal
Observation Often, light fields are undersampled, or aliased Limited number of viewpoints Sheared reconstruction may still be aliased! Aliasing, overlap of reconstruction filter with replicas Aliasing, double images
Reconstruction without aliasing Avoid aliasing by reducing spatial or angular bandwidth Low spatial bandwidth, blurry image Low angular bandwidth (wide aperture), shallow depth of field
Reconstruction without aliasing Low spatial bandwidth, blurry image Low angular bandwidth, shallow depth of field
Improved reconstruction Construct reconstruction filter that preserves as much of signal as possible Improved reconstruction for undersampled (aliased) light fields http://www.cs.unc.edu/~mcmillan/papers/egsr03_stewart.pdf
Improved reconstruction Blurring (low-pass filtering) images separately Low bandwidth in v Large aperture filter Low bandwidth in t Large aperture filtering and blurring Order doesn't matter
Improved reconstruction Frequency domain construction (figure)
Spatial domain implementation 1. Wide aperture filtering on original input images 2. Compute set of blurred input images 3. Wide aperture filter of blurred images 4. Subtract result of 3 from 1 5. Add result of 4 to 2 In practice, filters implemented e.g. using Gaussians
Improved reconstruction Blurry reconstruction Large aperture reconstruction Improved reconstruction
Today Introduction Light fields Signal processing analysis Light field cameras Applications
Light field cameras Already described by Lippmann, 1908 Using microlens array http://people.csail.mit.edu/fredo/publi/lippmann http://www.tgeorgiev.net/radiancecameras/epreuvesreversibles.pdf Similar construction by Adelson & Wang, 1992 For depth reconstruction
Light field cameras Camera arrays
Light field cameras Robotic cameras Restricted to static scenes
Light field cameras Hand-held cameras Microlens array mounted on top of camera sensor Commercial product http://raytrix.de/ Camera with microlens array http://graphics.stanford.edu/papers/lfcamera/
Light field cameras Hand-held cameras Sensor pixels Scene point http://www.tgeorgiev.net/fullresolution.pdf Each sensor pixel of a microlens observes the scene point from a slightly different viewpoint Viewpoints distributed over aperture of main lens Other arrangements possible
Light field cameras Sensor data Close-up http://graphics.stanford.edu/papers/lfcamera/
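The pixel-to-viewpoint mapping described above suggests how sub-aperture views can be pulled out of the raw sensor data. A minimal sketch assuming an idealized sensor where each microlens covers exactly a U x V pixel block (real raw data additionally needs calibration, demosaicing, and vignetting correction):

```python
import numpy as np

# Hypothetical raw plenoptic sensor: each microlens covers a U x V block
# of pixels, and pixel (u, v) under every microlens sees the scene through
# the same sub-region of the main lens aperture. Taking every U-th / V-th
# pixel therefore yields one sub-aperture view.
U, V = 4, 4            # pixels per microlens (angular resolution)
S, T = 32, 24          # number of microlenses (spatial resolution)
raw = np.arange(S * U * T * V, dtype=np.float64).reshape(S * U, T * V)

def subaperture(raw, u, v, U=U, V=V):
    """Sub-aperture image for aperture position (u, v)."""
    return raw[u::U, v::V]

view = subaperture(raw, 1, 2)
print(view.shape)      # one (S, T) view per aperture position
```

Each of the U * V sub-aperture views corresponds to one sample on the aperture of the main lens, giving the 4D light field captured in a single exposure.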
Light field cameras http://graphics.stanford.edu/papers/fourierphoto/fourierphoto-600dpi.pdf
Today Introduction Light fields Signal processing analysis Light field cameras Applications
Digital refocusing Change focus of image after capture http://graphics.stanford.edu/papers/fourierphoto/fourierphoto-600dpi.pdf
Digital refocusing As before: adjust shear of reconstruction filter Shear determines focusing distance But careful, different parameterization Now: ray tracing explanation
3D displays Displays that show light fields Use view-dependent pixels Pixels show different color depending on viewing angle Autostereoscopic viewing: left and right eyes see different color at each pixel No glasses! Spatial resolution: how many pixels Angular resolution: how many directions per pixel View-dependent pixels Left eye Right eye
Integral imaging First 3D autostereoscopic displays [Lippmann 1908] http://en.wikipedia.org/wiki/integral_imaging Using lenticular optics Each lens is a view-dependent pixel Same two-plane parameterization to describe rays (u,v) (s,t) http://groups.csail.mit.edu/graphics/pubs/siggraph2000_drlf.pdf
Integral imaging Place image that encodes light field data behind lenticular sheet (lens array) Lens array Encoded light field without lenticular sheet Image from certain viewing position
Parallax barriers Same concept as lenticulars, different implementation Two-plane parameterization to describe rays (s,t) (u,v)
Limitations & outlook Parallax barriers & lenticulars suffer from poor resolution Low angular resolution: only few (<10) directional (angular) samples per pixel Aliasing problems, ghosting Holographic screens Higher angular resolution Maybe this is the future? See http://www.holografika.com/
Summary Plenoptic function: theoretical concept that captures complete information of light distribution in a scene Light fields: practical representation of plenoptic function restricted to 4D Two plane parameterization Convenient to capture, process Signal processing analysis, bow tie spectrum Sheared reconstruction, synthetic aperture filters Light field cameras Camera arrays, hand-held camera Applications Rendering images from novel viewpoints After the fact processing for photography, refocusing Other processing, depth extraction 3D displays
Next time From 2D images to 3D geometry