Modeling Light. Michal Havlik


Modeling Light Michal Havlik 15-463: Computational Photography Alexei Efros, CMU, Fall 2007

What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts); λ is wavelength. Useful facts: light travels in straight lines; in a vacuum, radiance emitted = radiance arriving, i.e. there is no transmission loss.

What do we see? 3D world 2D image Point of observation Figures Stephen E. Palmer, 2002

What do we see? 3D world 2D image Painted backdrop

On Simulating the Visual Experience: just feed the eyes the right data and no one will know the difference! Philosophy: ancient question: does the world really exist? Science fiction: many, many, many books on the subject, e.g. slowglass. Latest take: The Matrix. Physics: slowglass might be possible? Computer Science: Virtual Reality. To simulate, we need to know: what does a person see?

The Plenoptic Function (Figure by Leonard McMillan). Q: What is the set of all things that we can ever see? A: The Plenoptic Function (Adelson & Bergen). Let's start with a stationary person and try to parameterize everything that he can see.

Grayscale snapshot is intensity of light P(θ,φ): seen from a single viewpoint, at a single time, averaged over the wavelengths of the visible spectrum (can also do P(x,y), but spherical coordinates are nicer).

Color snapshot is intensity of light P(θ,φ,λ): seen from a single viewpoint, at a single time, as a function of wavelength.

A movie is intensity of light P(θ,φ,λ,t): seen from a single viewpoint, over time, as a function of wavelength.

Holographic movie is intensity of light P(θ,φ,λ,t,V_X,V_Y,V_Z): seen from ANY viewpoint, over time, as a function of wavelength.

The Plenoptic Function P(θ,φ,λ,t,V_X,V_Y,V_Z): can reconstruct every possible view, at every moment, from every position, at every wavelength. Contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality! Not bad for a function
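As a sketch of what the slide claims, the 7D plenoptic function can be pictured as a lookup table of radiance samples; the array sizes and names below are hypothetical, chosen only to make the slicing concrete:

```python
import numpy as np

# Hypothetical discrete sampling of the 7D plenoptic function
# P(theta, phi, lambda, t, Vx, Vy, Vz): a table of radiance values.
# Axes: 2 viewing angles, wavelength, time, 3 viewpoint coordinates.
shape = (18, 36, 3, 5, 4, 4, 4)    # coarse illustrative grid
P = np.random.rand(*shape)          # stand-in radiance samples

def plenoptic(theta_i, phi_i, lam_i, t_i, vx_i, vy_i, vz_i):
    """Radiance seen from viewpoint (vx, vy, vz) in direction
    (theta, phi), at wavelength index lam_i and time index t_i."""
    return P[theta_i, phi_i, lam_i, t_i, vx_i, vy_i, vz_i]

# Fixing viewpoint, time, and wavelength and varying (theta, phi)
# extracts one grayscale snapshot -- the P(theta, phi) slide above.
snapshot = P[:, :, 0, 0, 2, 2, 2]
print(snapshot.shape)
```

Each slide earlier in the sequence (snapshot, color snapshot, movie, holographic movie) corresponds to freezing or freeing one more axis of this table.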

Sampling the Plenoptic Function (top view): just lookup -- QuickTime VR

Ray: let's not worry about time and color: 5D = 3D position + 2D direction, P(θ,φ,V_X,V_Y,V_Z). Slide by Rick Szeliski and Michael Cohen

How can we use this? Along a ray from a lit surface to the camera there is no change in radiance.

Ray Reuse: an infinite line. Assume light is constant (vacuum, non-dispersive medium): 4D = 2D direction + 2D position. Slide by Rick Szeliski and Michael Cohen

Only need plenoptic surface

Synthesizing novel views Slide by Rick Szeliski and Michael Cohen

Lumigraph / Lightfield: outside a convex space enclosing the stuff of the scene, the rays through empty space form a 4D set. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization: 2D position + 2D direction (s, θ). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization: 2D position + 2D position (s, u): the 2-plane parameterization. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization: 2D position + 2D position, (s,t) and (u,v): the 2-plane parameterization. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization: hold s,t constant, let u,v vary: an image. Slide by Rick Szeliski and Michael Cohen
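The organization slides above can be made concrete with a toy 4D array; the sizes are hypothetical, and a real light field would also carry a color axis:

```python
import numpy as np

# A 4D light field L[s, t, u, v] under the two-plane parameterization:
# a ray is named by where it crosses the (s,t) plane and the (u,v) plane.
S, T, U, V = 8, 8, 64, 64
L = np.random.rand(S, T, U, V)     # stand-in radiance samples

# Hold (s,t) constant, let (u,v) vary: one image,
# the view from camera grid point (3, 5).
image = L[3, 5]

# Hold (u,v) constant instead: the rays through one point on the
# far plane, as seen from every camera position on the (s,t) plane.
aperture_view = L[:, :, 30, 30]
print(image.shape, aperture_view.shape)
```

The first slice is exactly what the gantry/rebinning capture slides below record: one photograph per (s,t) sample.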

Lumigraph / Lightfield

Lumigraph - Capture, Idea 1: move the camera carefully over the s,t plane (gantry; see the Lightfield paper). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Capture, Idea 2: move the camera anywhere (rebinning; see the Lumigraph paper). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering: for each output pixel, determine (s,t,u,v); either use the closest discrete RGB sample or interpolate nearby values. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering: Nearest: take the closest s and closest u, draw it. Blend: quadrilinear interpolation over the 16 nearest samples. Slide by Rick Szeliski and Michael Cohen
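A minimal NumPy sketch of the "blend 16 nearest" option, i.e. quadrilinear interpolation over the two nearest samples along each of the four axes (function name and shapes are mine, not from the Lumigraph paper):

```python
import numpy as np

def quadrilinear_sample(L, s, t, u, v):
    """Interpolate the 4D light field L at continuous (s, t, u, v):
    blend the 16 nearest discrete samples, each weighted by its
    proximity along every axis (2 neighbors per axis, 2^4 = 16)."""
    coords = []
    for x, n in zip((s, t, u, v), L.shape):
        lo = int(np.clip(np.floor(x), 0, n - 2))  # lower neighbor
        coords.append((lo, x - lo))               # (index, fraction)
    out = 0.0
    for corner in range(16):                      # 16 corner samples
        w, idx = 1.0, []
        for axis, (lo, f) in enumerate(coords):
            d = (corner >> axis) & 1              # 0 = lower, 1 = upper
            w *= f if d else (1.0 - f)
            idx.append(lo + d)
        out += w * L[tuple(idx)]
    return out

L = np.random.rand(4, 4, 8, 8)
val = quadrilinear_sample(L, 1.0, 2.0, 3.0, 4.0)  # exact grid point
```

At integer coordinates the weights collapse onto a single sample, so the blend agrees with the "nearest" scheme there.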

Stanford multi-camera array: 128 cameras, 640×480 pixels at 30 fps, synchronized timing, continuous streaming, flexible arrangement.

Light field photography using a handheld plenoptic camera Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan

Conventional versus light field camera 2005 Marc Levoy

Conventional versus light field camera: uv-plane, st-plane. 2005 Marc Levoy

Prototype camera: Contax medium format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array with 125 μm square-sided microlenses. 4000×4000 pixels, 292×292 lenses = 14×14 pixels per lens.

Digitally stopping down: stopping down = summing only the central portion of each microlens. 2005 Marc Levoy

Digital refocusing: refocusing = summing windows extracted from several microlenses. 2005 Marc Levoy
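A shift-and-add sketch of digital refocusing, assuming the plenoptic data has already been rearranged into a stack of sub-aperture views (one image per pixel position under the microlenses); the function and its alpha parameter are illustrative, not Ng et al.'s exact formulation:

```python
import numpy as np

def refocus(subapertures, alpha):
    """Shift-and-add refocusing sketch. `subapertures` has shape
    (U, V, H, W): one H x W view per (u, v) aperture position.
    Shifting each view in proportion to its offset from the aperture
    center before summing moves the focal plane; alpha sets pixels of
    shift per unit of aperture offset (alpha = 0 keeps the original
    focal plane)."""
    U, V, H, W = subapertures.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(subapertures[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

views = np.ones((3, 3, 8, 8))       # toy constant-radiance views
refocused = refocus(views, 1.0)
```

Digitally stopping down, by contrast, would simply sum a centered subset such as `views[1:2, 1:2]` instead of all (u, v) views.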

Example of digital refocusing 2005 Marc Levoy

Digitally moving the observer: moving the observer = moving the window we extract from the microlenses. 2005 Marc Levoy

Example of moving the observer 2005 Marc Levoy

Moving backward and forward 2005 Marc Levoy

3D Lumigraph: one row of the s,t plane, i.e., hold t constant.

3D Lumigraph: one row of the s,t plane, i.e., hold t constant; thus (s,u,v), a row of images.

P(x,t) by David Dewey

2D: Image What is an image? All rays through a point Panorama? Slide by Rick Szeliski and Michael Cohen

Image Image plane 2D position

Spherical Panorama. See also: 2003 New Year's Eve http://www.panoramas.dk/fullscreen3/f1.html All light rays through a point form a panorama. Totally captured in a 2D array -- P(θ,φ). Where is the geometry???

Other ways to sample the Plenoptic Function. Moving in time: a spatio-temporal volume P(θ,φ,t), useful to study temporal changes. Long an interest of artists: Claude Monet, Haystacks studies.

Space-time images: other ways to slice the plenoptic function (axes t, y, x).
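Slicing a video volume makes these alternative cuts concrete; the array names and sizes below are hypothetical:

```python
import numpy as np

# Treat a video as a spatio-temporal volume V[t, y, x].
T, H, W = 30, 48, 64
video = np.random.rand(T, H, W)     # stand-in frames

frame     = video[10]               # fix t: one ordinary image (H, W)
slit_scan = video[:, :, 32]         # fix x: a space-time (t, y) slice
epi_slice = video[:, 20, :]         # fix y: a (t, x) slice, the cut
                                    # used in epipolar-plane images
print(frame.shape, slit_scan.shape, epi_slice.shape)
```

The familiar movie is just the t-stack of the first slice; the other two are the "space-time images" the slide refers to.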

The Theatre Workshop Metaphor (Adelson & Pentland, 1996): desired image; Painter; Lighting Designer; Sheet-metal worker.

Painter (images)

Lighting Designer (environment maps) Show Naimark SF MOMA video http://www.debevec.org/naimark/naimark-displacements.mov

Sheet-metal Worker (geometry) Let surface normals do all the work!

Working together (clever Italians): want to minimize cost, so each one does what's easiest for him. Geometry: big things. Images: detail. Lighting: illumination effects.

Midterm next Tuesday 80 minutes long (come on time!) Closed book, closed notes, closed laptops But can have a cheat sheet (1 page, both sides) Will cover all material up to this week!

Midterm Review. Cameras: pin-hole model, camera knobs, perspective projection, orthographic (parallel) projection, etc. Capturing & Modeling Light: physics of light, light perception, color, the Plenoptic Function, Lumigraph/Lightfields. Image Processing: point processing, histograms, filtering, correlation, convolution, 2D Fourier transform, low-pass/band-pass/high-pass filtering, edge detection, Gaussian and Laplacian pyramids, blending, etc. Image Warping and Morphing: 2D parametric transformations, homogeneous coordinates, degrees of freedom, difference between affine and projective transformations, forward/inverse warping, morphing, etc. Data-driven Methods: Markov Chains, using data to model the world, face modeling with a basis, video and texture synthesis.