Modeling Light. Slides from Alexei A. Efros and others

Project 3 Results http://www.cs.brown.edu/courses/cs129/results/proj3/jcmace/ http://www.cs.brown.edu/courses/cs129/results/proj3/damoreno/ http://www.cs.brown.edu/courses/cs129/results/proj3/taox/

Stereo Recap. Epipolar geometry: epipoles are the intersection of the baseline with the image planes; the matching point in the second image lies on a line passing through its epipole; the fundamental matrix maps a point in one image to a line (its epipolar line) in the other; we can solve for F given corresponding points (e.g., interest points). Stereo depth estimation: estimate disparity by finding corresponding points along scanlines; depth is inversely proportional to disparity; projected, structured lighting can replace one of the cameras.
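The depth-disparity relation in this recap can be made concrete. For a rectified stereo pair, Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity in pixels. A minimal sketch; the numeric values are illustrative, not from the slides:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its stereo disparity, for a rectified pair.

    Z = f * B / d: depth is inversely proportional to disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (d -> 0 as the point recedes to infinity)")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 700 px focal length, 10 cm baseline,
# and 35 px disparity give a depth of 2 m.
print(depth_from_disparity(700.0, 0.10, 35.0))  # -> 2.0
```

Doubling the disparity halves the depth, which is the "depth is inverse to disparity" point above.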

Modeling Light. Slides from Alexei A. Efros and others. cs129: Computational Photography, James Hays, Brown, Fall 2012

What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts); λ is wavelength. Useful things: light travels in straight lines; in a vacuum, radiance emitted = radiance arriving, i.e. there is no transmission loss.

What do we see? 3D world 2D image Point of observation Figures Stephen E. Palmer, 2002

What do we see? 3D world 2D image Point of observation Painted backdrop

On Simulating the Visual Experience. Just feed the eyes the right data; no one will know the difference! Philosophy: the ancient question, does the world really exist? Science fiction: many, many, many books on the subject, e.g. slow glass from Light of Other Days. Latest take: The Matrix. Physics: slow glass might be possible? Computer science: virtual reality. To simulate, we need to know: what does a person see?

The Plenoptic Function. Figure by Leonard McMillan. Q: What is the set of all things that we can ever see? A: The Plenoptic Function (Adelson & Bergen). Let's start with a stationary person and try to parameterize everything that he can see.

A grayscale snapshot is the intensity of light P(θ, φ): seen from a single viewpoint, at a single time, averaged over the wavelengths of the visible spectrum. (Can also do P(x, y), but spherical coordinates are nicer.)

A color snapshot is the intensity of light P(θ, φ, λ): seen from a single viewpoint, at a single time, as a function of wavelength.

A movie is the intensity of light P(θ, φ, λ, t): seen from a single viewpoint, over time, as a function of wavelength.

A holographic movie is the intensity of light P(θ, φ, λ, t, V_X, V_Y, V_Z): seen from ANY viewpoint, over time, as a function of wavelength.

The Plenoptic Function P(θ, φ, λ, t, V_X, V_Y, V_Z) can reconstruct every possible view, at every moment, from every position, at every wavelength. It contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality! Not bad for a function of 7 variables.

Sampling the Plenoptic Function (top view): just look up Google Street View.

Model geometry or just capture images?

Ray. Let's not worry about time and color: 5D = 3D position + 2D direction, P(θ, φ, V_X, V_Y, V_Z). Slide by Rick Szeliski and Michael Cohen

How can we use this? There is no change in radiance along the ray from the surface to the camera (figure labels: Lighting, Surface, Camera).

Ray Reuse. Along an infinite line, assume light is constant (vacuum, non-dispersive medium): 4D = 2D direction + 2D position. Slide by Rick Szeliski and Michael Cohen

Only need the plenoptic surface.

Synthesizing novel views Slide by Rick Szeliski and Michael Cohen

Lumigraph / Lightfield. Outside a convex space enclosing the scene, the light field is 4D (figure labels: Empty, Stuff). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization. 2D position + 2D direction: (s, θ). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization. 2D position + 2D position: (s, u); 2-plane parameterization. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Organization. 2D position + 2D position: (s, t) and (u, v); 2-plane parameterization. Slide by Rick Szeliski and Michael Cohen
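The 2-plane parameterization above can be sketched in code: a ray is reduced to four numbers by intersecting it with the (s, t) plane and the (u, v) plane. A minimal sketch; the plane positions (z = 0 and z = 1) are illustrative assumptions:

```python
def ray_to_st_uv(origin, direction, z_st=0.0, z_uv=1.0):
    """Intersect a ray with two parallel planes (z = z_st and z = z_uv)
    to get its 4D light-field coordinates (s, t, u, v)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray parallel to the parameterization planes")
    a = (z_st - oz) / dz          # ray parameter at the (s, t) plane
    b = (z_uv - oz) / dz          # ray parameter at the (u, v) plane
    return (ox + a * dx, oy + a * dy,   # (s, t)
            ox + b * dx, oy + b * dy)   # (u, v)

# A ray through the origin heading up and to the right:
print(ray_to_st_uv((0.0, 0.0, 0.0), (1.0, 0.0, 1.0)))  # -> (0.0, 0.0, 1.0, 0.0)
```

A ray perpendicular to the planes hits both at the same (x, y), so its s, t and u, v coordinates coincide.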

Lumigraph - Organization. Hold (s, t) constant and let (u, v) vary: an image. Slide by Rick Szeliski and Michael Cohen

Lumigraph / Lightfield

Lumigraph - Capture. Idea 1: move the camera carefully over the (s, t) plane with a gantry (see the Lightfield paper). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Capture. Idea 2: move the camera anywhere, then rebin (see the Lumigraph paper). Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering. For each output pixel, determine (s, t, u, v); then either use the closest discrete RGB sample, or interpolate nearby values. Slide by Rick Szeliski and Michael Cohen

Lumigraph - Rendering. Nearest: take the closest s and closest u and draw it. Blend: quadrilinear interpolation over the 16 nearest samples. Slide by Rick Szeliski and Michael Cohen
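Quadrilinear interpolation blends the 16 samples at the corners of the 4D cell containing (s, t, u, v), each weighted by how close the query is to that corner. A sketch, assuming the light field is stored as a nested list indexed lf[s][t][u][v]:

```python
import math

def quadrilinear(lf, s, t, u, v):
    """Blend the 16 nearest light-field samples around (s, t, u, v)."""
    coords = (s, t, u, v)
    base = [math.floor(c) for c in coords]       # lower corner of the 4D cell
    frac = [c - b for c, b in zip(coords, base)]  # fractional offset per axis
    value = 0.0
    for corner in range(16):                      # 2^4 corners of the cell
        weight, idx = 1.0, []
        for d in range(4):
            bit = (corner >> d) & 1
            weight *= frac[d] if bit else 1.0 - frac[d]
            idx.append(base[d] + bit)
        value += weight * lf[idx[0]][idx[1]][idx[2]][idx[3]]
    return value
```

A light field that is linear in its coordinates is reproduced exactly by this blend, which is a handy sanity check.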

Stanford multi-camera array: 640 × 480 pixels × 30 fps × 128 cameras; synchronized timing, continuous streaming, flexible arrangement.

Light field photography using a handheld plenoptic camera Commercialized as Lytro Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan

Conventional versus light field camera Marc Levoy

Conventional versus light field camera (uv-plane, st-plane). Marc Levoy

Prototype camera: Contax medium-format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array with 125 μm square-sided microlenses. 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens.
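The pixels-per-lens figure follows directly from the sensor and microlens counts, with the slide rounding the result:

```python
sensor_px = 4000   # pixels per side of the Kodak sensor
n_lenses = 292     # microlenses per side of the array
per_lens = sensor_px / n_lenses

# 4000 / 292 = 13.698..., i.e. roughly 14 x 14 pixels behind each microlens
print(round(per_lens, 1), round(per_lens))  # -> 13.7 14
```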

Digitally stopping down: stopping down = summing only the central portion of each microlens. Marc Levoy

Digital refocusing: refocusing = summing windows extracted from several microlenses. Marc Levoy
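Both operations are sums over the angular samples behind each microlens. A toy shift-and-sum sketch; the lf[u][v][s][t] layout and the nearest-neighbor shift are simplifying assumptions (real refocusing interpolates the shifted views):

```python
def refocus(lf, alpha):
    """Shift-and-sum refocusing of a 4D light field lf[u][v][s][t].

    Each angular view (u, v) is shifted in proportion to its offset from
    the central view, then all views are averaged; alpha = 0 simply
    averages the views, and varying alpha moves the focal plane.
    """
    U, V = len(lf), len(lf[0])
    S, T = len(lf[0][0]), len(lf[0][0][0])
    out = [[0.0] * T for _ in range(S)]
    for u in range(U):
        for v in range(V):
            du = round(alpha * (u - (U - 1) / 2))  # nearest-neighbor shift
            dv = round(alpha * (v - (V - 1) / 2))
            for s in range(S):
                for t in range(T):
                    ss = min(max(s + du, 0), S - 1)  # clamp at the border
                    tt = min(max(t + dv, 0), T - 1)
                    out[s][t] += lf[u][v][ss][tt]
    return [[x / (U * V) for x in row] for row in out]
```

Stopping down corresponds to restricting the (u, v) loops to the central views instead of summing over all of them.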

Example of digital refocusing Marc Levoy

Marc Levoy Digitally moving the observer Σ moving the observer = moving the window we extract from the microlenses Σ

Example of moving the observer Marc Levoy

P(x,t) by David Dewey

2D: Image What is an image? All rays through a point Panorama? Slide by Rick Szeliski and Michael Cohen

Image Image plane 2D position

Spherical Panorama. See also: 2003 New Year's Eve http://www.panoramas.dk/fullscreen3/f1.html All light rays through a point form a panorama, totally captured in a 2D array: P(θ, φ). Where is the geometry???
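The 2D indexing of rays through a point can be made concrete: a unit ray direction maps to spherical coordinates (θ, φ). A small sketch; conventions vary, and the one here (θ as polar angle from +z, φ as azimuth) is an assumption:

```python
import math

def direction_to_theta_phi(dx, dy, dz):
    """Map a unit ray direction to spherical panorama coordinates (theta, phi)."""
    theta = math.acos(dz)      # polar angle from +z, in [0, pi]
    phi = math.atan2(dy, dx)   # azimuth, in (-pi, pi]
    return theta, phi

# Straight up maps to the pole; a horizontal ray maps to the equator.
print(direction_to_theta_phi(0.0, 0.0, 1.0))  # -> (0.0, 0.0)
```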

Other ways to sample the Plenoptic Function. Moving in time: the spatio-temporal volume P(θ, φ, t), useful for studying temporal changes. Long an interest of artists: Claude Monet, Haystacks studies.

Space-time images: other ways to slice the plenoptic function (figure axes: x, y, t).

The Theatre Workshop Metaphor (Adelson & Pentland, 1996): the desired image is produced by a Painter, a Lighting Designer, and a Sheet-metal worker.

Painter (images)

Lighting Designer (environment maps) Show Naimark SF MOMA video http://www.debevec.org/naimark/naimark-displacements.mov

Sheet-metal Worker (geometry) Let surface normals do all the work!

Working together ("clever Italians"): we want to minimize cost, so each one does what's easiest for him. Geometry: big things. Images: detail. Lighting: illumination effects.