Computational Photography

Computational Photography. Matthias Zwicker, University of Bern, Fall 2010

Today: light fields. Introduction; signal processing analysis; light field cameras; applications.

Introduction Pinhole camera captures all light rays passing through one point

Introduction What if we could capture all light rays in a scene? Plenoptic function: information available to an observer at any point in space and time [Adelson, Bergen] http://web.mit.edu/persci/people/adelson/pub_pdfs/elements91.pdf

Plenoptic function. Most general case: a function of 7 variables. Ray direction (2 degrees of freedom, dof), ray origin (3 dof), wavelength (1 dof), time (1 dof). The value stored is radiance.
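In symbols, a common way to write this (following Adelson and Bergen; the exact ordering of the arguments varies between authors):

```latex
% 3 dof for the ray origin, 2 dof for the ray direction,
% 1 dof for wavelength, 1 dof for time; the stored value is radiance
P = P(x, y, z, \theta, \phi, \lambda, t)
```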

Plenoptic function. A 2D image is a 2D slice of the plenoptic function. Perspective images: fix the camera position, vary the ray direction. Other arrangements are possible, e.g., multi-viewpoint panoramas http://grail.cs.washington.edu/projects/multipano/

Applications. Panoramas: multi-viewpoint, spherical, ... Image-based rendering (free choice of viewpoint), interactive walkthroughs. Two panoramas from nearby viewpoints. http://www.cs.unc.edu/~mcmillan/papers/sig95_mcmillan.pdf

Applications. Computing images using signal processing, e.g., refocusing http://graphics.stanford.edu/papers/lfcamera/ Depth extraction: from 2D slices of the plenoptic function to a depth map http://www-bcs.mit.edu/people/jyawang/demos/plenoptic/plenoptic.html

Applications. Autostereoscopic 3D displays, no glasses: Holografika (holographic projection screens) http://www.holografika.com/ and Alioscopy (microlens arrays) www.alioscopy.com

Capturing the plenoptic function. Use many cameras at the same time http://www.timesplice.com.au/home.html http://graphics.stanford.edu/projects/array/ or a plenoptic camera http://www-bcs.mit.edu/people/jyawang/demos/plenoptic/plenoptic.html

Today: light fields. Introduction; signal processing analysis; light field cameras; applications.

Light fields. We want to capture, process, and display the plenoptic function, so we need a suitable representation to operate on. Light fields are based on a two-plane parameterization.

Two-plane parameterization. Assumption: record light rays only outside the convex hull of the scene. Then we need only 2 dof for the origin of a light ray, since radiance along a ray does not change; ray geometry has 4 dof instead of 5, so light fields are 4D functions. (Figure: scene geometry and its convex hull; recorded vs. not recorded rays.)

Two-plane parameterization. Ray geometry is represented by the intersection locations with two parallel planes; this can be interpreted as 2 dof for origin and 2 dof for direction. Each ray stores RGB radiance. Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf
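In the notation of Levoy and Hanrahan (plane names are sometimes swapped in other papers), the light field is the 4D function

```latex
% radiance along the ray that intersects the uv-plane at (u, v)
% and the parallel st-plane at (s, t); one RGB triple per ray
L = L(u, v, s, t)
```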

Two-plane parameterization. Uniform sampling: usually cameras on a uniform grid on the (u,v) plane, with the scene behind the (s,t) plane. Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf

Acquiring light fields. Sample directly on a uniform grid: each camera corresponds to one sample point on the uv-plane. Otherwise, resampling is necessary.

Two-plane parameterization. Uniform sampling on the (u,v) camera plane and the (s,t) plane. Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf

Two-plane parameterization. Spatial resolution: resolution of each image, i.e., the number of samples on the (s,t) plane, usually a few million. Angular resolution: number of cameras (the number of directions each scene point is seen from), i.e., the number of samples on the (u,v) camera plane, usually a few dozen to a few hundred.

Light field rendering. Given a uniformly sampled light field, place a camera arbitrarily relative to the parameterization planes; the desired image consists of a set of rays through the pixel centers. Rendering requires interpolation: in 4D, each ray is given by 4 parameters. Levoy, Hanrahan, Light field rendering http://graphics.stanford.edu/papers/light/light-lores-corrected.pdf

Interpolation (2D illustration). For a virtual camera ray intersecting the st- and uv-planes: nearest neighbor, uv bilinear, or uvst quadrilinear interpolation (sketched below). http://graphics.stanford.edu/papers/light/
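A minimal sketch of the quadrilinear lookup, assuming the light field is stored as a NumPy array indexed lf[u, v, s, t, channel] and that the ray has already been converted to fractional grid coordinates (the array layout and function name are illustrative, not from the paper):

```python
import numpy as np

def sample_lightfield(lf, u, v, s, t):
    """Quadrilinearly interpolate the radiance of a ray that hits the
    uv- and st-planes at fractional grid coordinates (u, v, s, t).
    lf: array of shape (Nu, Nv, Ns, Nt, 3), sampled on integer grids."""
    coords = np.array([u, v, s, t], dtype=float)
    lo = np.floor(coords).astype(int)
    hi = np.minimum(lo + 1, np.array(lf.shape[:4]) - 1)
    frac = coords - lo

    result = np.zeros(3)
    # Accumulate the 16 surrounding grid samples, each weighted by the
    # product of (1 - frac) or frac factors in the four dimensions.
    for corner in range(16):
        idx, weight = [], 1.0
        for dim in range(4):
            bit = (corner >> dim) & 1
            idx.append(hi[dim] if bit else lo[dim])
            weight *= frac[dim] if bit else (1.0 - frac[dim])
        result += weight * lf[tuple(idx)]
    return result

# Nearest neighbor would instead round each coordinate and index once.
lf = np.random.rand(8, 8, 64, 64, 3)                 # toy 4D light field
print(sample_lightfield(lf, 3.2, 4.7, 10.5, 20.1))
```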

Today: light fields. Introduction; signal processing analysis; light field cameras; applications.

Signal processing analysis. We want to understand sampling, reconstruction, and aliasing of light fields, similarly to 2D images. Benefits: antialiasing algorithms, high quality filters, efficient algorithms. Chai et al., Plenoptic sampling http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm

Ray space (2D illustration). A ray is a point in ray space; the origin of v is measured relative to t. (Figure: two-plane parameterization vs. ray space.) Caution: different arrangements occur in the literature!

Ray space. All rays through a scene point form a line in ray space, with slope inversely related to depth: for a point towards infinity the line becomes horizontal, for a point on the v-axis the slope is diagonal.
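To make the slope statement concrete, here is a sketch under the convention of this slide (t parameterizes the viewpoint, v the direction plane at distance f, measured relative to t): a scene point at lateral position x and depth z is seen along

```latex
v = \frac{f}{z}\,(x - t)
\qquad\Longrightarrow\qquad
\frac{dv}{dt} = -\frac{f}{z}
% the slope is inversely proportional to depth: z -> infinity gives a
% horizontal line, while small z gives a steep line
```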

Ray space. A scene at constant depth forms a sheared structure in ray space. (Figure: two-plane parameterization and the corresponding ray space.)

Ray space. Visualize the ray space of one scanline while moving the viewpoint along t: for a scene with several depth layers, the resulting ray space image is the epipolar plane image (EPI). http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm
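Concretely, with purely horizontal camera motion an EPI is just a fixed-scanline slice through the stack of input images (a minimal sketch; the array layout is an assumption):

```python
import numpy as np

stack = np.random.rand(16, 240, 320)   # toy data: stack[t, y, x], 16 viewpoints

row = 120                              # choose one scanline y
epi = stack[:, row, :]                 # epipolar plane image, axes (t, x)
# Each scene point traces a line in `epi`; the line's slope encodes its depth.
```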

Fourier analysis. Taking the Fourier transform of ray space gives the light field spectrum.

Fourier analysis. For scenes with a bounded depth range, the spectrum of the EPI of one scanline is confined to a bow tie shaped region; outside it the spectrum has zero power.
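Why a bow tie: for a Lambertian scene at a single depth z with texture rho(x), the EPI relation above makes the spectrum collapse onto a line, and sweeping z over a bounded depth range sweeps that line through a wedge. A sketch of the argument (same (t, v) convention as before; constants omitted):

```latex
L(t, v) = \rho\!\left(t + \tfrac{z}{f}\,v\right)
\;\Longrightarrow\;
\hat{L}(\Omega_t, \Omega_v) \;\propto\; \hat{\rho}(\Omega_t)\,
\delta\!\left(\Omega_v - \tfrac{z}{f}\,\Omega_t\right)
% all energy lies on the line \Omega_v = (z/f)\,\Omega_t; for depths between
% z_min and z_max the support is the wedge between the corresponding lines,
% i.e., the bow tie
```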

Fourier analysis. (Figure: ray space (EPI) of a scene with several depth layers, and its Fourier transform.) http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm

Sampling and reconstruction. Continuous reconstruction allows interpolation and rendering of new views. Before: quadrilinear interpolation/reconstruction of the desired rays. Can we do better?

Sampling and reconstruction. Sampling leads to replication of spectra; reconstruction is multiplication with the filter spectrum; overlap with non-central replicas is aliasing. (Figure: idealized spectrum of the reconstruction filter; reconstruction with aliasing, which appears as double edges.)

Sampling and reconstruction. A sheared reconstruction filter matched to the depth of the scene allows a lower sampling rate. Higher sampling rate, isotropic reconstruction filter: aliasing. Lower sampling rate, optimal (sheared) reconstruction filter: no aliasing.

Sheared reconstruction No aliasing at lower sampling rate http://graphics.cs.cmu.edu/projects/plenoptic-sampling/ps_projectpage.htm

Sheared reconstruction. Shearing the reconstruction filter is equivalent to shearing the signal before filtering. Visualization using a 3D light field example: only horizontal camera movement, with the 3D light field interpreted as an image stack.

Sheared reconstruction. Reconstruction: filtering across the input images; the shear determines which depth is aligned with the filter (see the sketch below).
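A minimal sketch of this "filter across images" step for the 3D light field (image stack) case: shearing aligns a chosen depth by shifting each image in proportion to its camera offset, then a filter (here a plain average, i.e., a box synthetic aperture) is applied across the stack. The names and the linear shift-per-view model are assumptions, not code from the papers.

```python
import numpy as np

def sheared_average(stack, shear):
    """stack[t, y, x]: images from equally spaced cameras along x.
    shear: pixel shift per camera step. The depth whose disparity matches
    `shear` comes out sharp; everything else is averaged away (blurred)."""
    n_views = stack.shape[0]
    center = (n_views - 1) / 2.0
    out = np.zeros_like(stack[0], dtype=float)
    for t in range(n_views):
        shift = int(round((t - center) * shear))   # shear the signal ...
        out += np.roll(stack[t], shift, axis=1)    # ... then filter (average)
    return out / n_views

# Usage: sweeping `shear` changes the focused depth; more views (a wider
# stack) correspond to a larger synthetic aperture.
stack = np.random.rand(16, 240, 320)
refocused = sheared_average(stack, shear=1.5)
```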

Filter visualization No shearing, high bandwidth in t and v Aliasing, as before

Filter visualization. Sheared, high bandwidth in t and v: no (or reduced) aliasing.

Filter visualization Sheared, low bandwidth in t (angle) Large synthetic aperture

Synthetic aperture Small synthetic aperture Large synthetic aperture

Synthetic aperture. The scene depth aligned with the sheared filter is sharp, in focus; the focus can be chosen arbitrarily by varying the shear! The size of the filter determines the depth of field. (Figure: adjusting the shear of a large synthetic aperture filter to change focus.) http://groups.csail.mit.edu/graphics/pubs/siggraph2000_drlf.pdf

Filter visualization Sheared, low bandwidth in t and v Blurry images

Note: shearing the filter and shearing the signal are equivalent. (Figure: sheared filter vs. sheared signal.)

Observation. Often, light fields are undersampled, or aliased (limited number of viewpoints); sheared reconstruction may still be aliased! Aliasing corresponds to overlap of the reconstruction filter with replicas and shows up as double images.

Reconstruction without aliasing Avoid aliasing by reducing spatial or angular bandwidth Low spatial bandwidth, blurry image Low angular bandwidth (wide aperture), shallow depth of field

Reconstruction without aliasing Low spatial bandwidth, blurry image Low angular bandwidth, shallow depth of field

Improved reconstruction. Construct a reconstruction filter that preserves as much of the signal as possible: improved reconstruction for undersampled (aliased) light fields. http://www.cs.unc.edu/~mcmillan/papers/egsr03_stewart.pdf

Improved reconstruction. Blurring (low-pass filtering) the images separately gives low bandwidth in v; a large aperture filter gives low bandwidth in t. Large aperture filtering and blurring can be combined by convolution; the order doesn't matter.

Improved reconstruction. (Figure: frequency domain construction of the combined filter.)

Spatial domain implementation: 1. Wide aperture filtering on the original input images. 2. Compute a set of blurred input images. 3. Wide aperture filtering of the blurred images. 4. Subtract the result of 3 from 1. 5. Add the result of 4 to 2. In practice, the filters are implemented e.g. using Gaussians (see the sketch below).
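A sketch of these five steps for the 3D (image stack) case, using Gaussians for the per-image blur and a shift-and-add average as the wide aperture filter. This is one reading of the recipe, not the authors' code; in particular, which blurred image stands in for "the result of 2" at the desired viewpoint is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wide_aperture(stack, shear):
    """Shift-and-add across views (box synthetic aperture filter)."""
    c = (len(stack) - 1) / 2.0
    return np.mean([np.roll(im, int(round((t - c) * shear)), axis=1)
                    for t, im in enumerate(stack)], axis=0)

def improved_reconstruction(stack, shear, blur_sigma=2.0):
    """stack[t, y, x]: light field image stack (horizontal camera motion)."""
    wide_orig = wide_aperture(stack, shear)                     # step 1
    blurred = np.stack([gaussian_filter(im, blur_sigma)         # step 2
                        for im in stack])
    wide_blur = wide_aperture(blurred, shear)                   # step 3
    high_freq = wide_orig - wide_blur                           # step 4
    # Step 5: add the recovered high frequencies to the alias-free but blurry
    # reconstruction; the central blurred view stands in for that here.
    return blurred[len(stack) // 2] + high_freq

stack = np.random.rand(16, 240, 320)
result = improved_reconstruction(stack, shear=1.5)
```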

Improved reconstruction. (Figure: blurry reconstruction vs. large aperture reconstruction vs. improved reconstruction.)

Today: light fields. Introduction; signal processing analysis; light field cameras; applications.

Light field cameras. Already described by Lippmann, 1908, using a microlens array. http://people.csail.mit.edu/fredo/publi/lippmann http://www.tgeorgiev.net/radiancecameras/epreuvesreversibles.pdf A similar construction by Adelson & Wang, 1992, for depth reconstruction.

Light field cameras Camera arrays

Light field cameras Robotic cameras Restricted to static scenes

Light field cameras. Hand-held cameras: microlens array mounted on top of the camera sensor. Commercial product: http://raytrix.de/ Camera with microlens array: http://graphics.stanford.edu/papers/lfcamera/

Light field cameras. Hand-held cameras: each sensor pixel under a microlens observes the scene point from a slightly different viewpoint, and the viewpoints are distributed over the aperture of the main lens. Other arrangements are possible (see the sketch below). http://www.tgeorgiev.net/fullresolution.pdf
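A toy illustration of how raw sensor data behind a microlens array can be rearranged into a 4D light field, assuming an idealized camera where each microlens covers exactly Nu x Nv pixels and the lenslets are axis-aligned (real decoders must also handle rotation, hexagonal packing, and calibration):

```python
import numpy as np

# Idealized raw plenoptic sensor: Ns x Nt microlenses, each covering
# exactly Nu x Nv sensor pixels (no rotation, no vignetting).
Ns, Nt, Nu, Nv = 60, 80, 9, 9
raw = np.random.rand(Ns * Nu, Nt * Nv)        # toy stand-in for sensor data

# Rearrange into a 4D light field lf[u, v, s, t]: lf[u, v] is the
# sub-aperture image seen from one viewpoint on the main lens aperture.
lf = raw.reshape(Ns, Nu, Nt, Nv).transpose(1, 3, 0, 2)   # shape (Nu, Nv, Ns, Nt)

center_view = lf[Nu // 2, Nv // 2]            # view through the lens center
```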

Light field cameras Sensor data Close-up http://graphics.stanford.edu/papers/lfcamera/

Light field cameras http://graphics.stanford.edu/papers/fourierphoto/fourierphoto-600dpi.pdf

Today: light fields. Introduction; signal processing analysis; light field cameras; applications.

Digital refocusing Change focus of image after capture http://graphics.stanford.edu/papers/fourierphoto/fourierphoto-600dpi.pdf

Digital refocusing Change focus of image after capture http://graphics.stanford.edu/papers/fourierphoto/fourierphoto-600dpi.pdf

Digital refocusing. As before: adjust the shear of the reconstruction filter; the shear determines the focusing distance. But careful: the parameterization is different. Now: a ray tracing explanation.

Digital refocusing. (Figure slides: ray tracing illustration of changing focus after capture.)
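For reference, the ray tracing view leads to the refocusing integral in the in-camera parameterization used in Ng's Fourier slice photography paper (quoted from memory, so treat the constants as indicative): with the light field L_F parameterized by the main lens plane (u, v) and the sensor plane (s, t) at separation F, the photograph refocused on a virtual sensor at depth alpha*F is

```latex
E_{\alpha F}(s, t) \;=\; \frac{1}{\alpha^{2} F^{2}}
\iint L_F\!\left(u,\, v,\;
u\left(1 - \tfrac{1}{\alpha}\right) + \tfrac{s}{\alpha},\;
v\left(1 - \tfrac{1}{\alpha}\right) + \tfrac{t}{\alpha}\right) du\, dv
% varying \alpha shears the light field and sweeps the focal plane;
% \alpha = 1 reproduces the conventionally focused photograph
```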

3D displays. Displays that show light fields use view-dependent pixels: pixels show different colors depending on the viewing angle. Autostereoscopic viewing: left and right eyes see a different color at each pixel, no glasses! Spatial resolution: how many pixels. Angular resolution: how many directions per pixel.

Integral imaging. First 3D autostereoscopic displays [Lippmann 1908] http://en.wikipedia.org/wiki/integral_imaging using lenticular optics: each lens is a view-dependent pixel. The same two-plane parameterization (u,v), (s,t) describes the rays. http://groups.csail.mit.edu/graphics/pubs/siggraph2000_drlf.pdf

Integral imaging. Place an image that encodes the light field data behind a lenticular sheet (lens array), as sketched below. (Figure: lens array; encoded light field without the lenticular sheet; image from a certain viewing position.)
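A minimal sketch of how such an encoded image can be assembled by interleaving views column by column, assuming an idealized lenticular sheet in which each lenticule covers exactly n_views pixel columns (real displays additionally need calibration for lenticule pitch, slant, and view order):

```python
import numpy as np

n_views, height, width = 8, 480, 640
views = np.random.rand(n_views, height, width, 3)   # toy rendered viewpoints

# Behind each lenticule, column k carries view k; the lens sends each column
# out in a different direction, so each eye picks up a different view.
encoded = np.zeros((height, width * n_views, 3))
for k in range(n_views):
    encoded[:, k::n_views] = views[k]
```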

Parallax barriers Same concept as lenticulars, different implementation Two-plane parameterization to describe rays (s,t) (u,v)

Limitations & outlook. Parallax barriers & lenticulars suffer from poor resolution: low angular resolution, only a few (<10) directional (angular) samples per pixel, leading to aliasing problems and ghosting. Holographic screens offer higher angular resolution; maybe this is the future? See http://www.holografika.com/

Summary. Plenoptic function: a theoretical concept that captures the complete information of the light distribution in a scene. Light fields: a practical representation of the plenoptic function restricted to 4D; two-plane parameterization; convenient to capture and process. Signal processing analysis: bow tie spectrum, sheared reconstruction, synthetic aperture filters. Light field cameras: camera arrays, hand-held cameras. Applications: rendering images from novel viewpoints; after-the-fact processing for photography, refocusing; other processing, depth extraction; 3D displays.

Next time From 2D images to 3D geometry