Jingyi Yu CISC 849. Department of Computer and Information Science


Digital Photography and Videos Jingyi Yu CISC 849

Light Fields, Lumigraph, and Image-based Rendering

Pinhole Camera A camera captures a set of rays. A pinhole camera captures the set of rays passing through a common 3D point (the ray origin o) and records them on the image plane.

Camera Array and Light Fields Digital cameras are cheap, but a single camera has low image quality and no sophisticated aperture (depth of field) effects. Build an array of cameras: it captures an array of images, i.e., an array of sets of rays.

If we want to render a new image, we can query each new ray against the light field ray database and interpolate it (we will see how). No 3D geometry is needed. Why is it useful?

Key Problem How do we represent a ray? What coordinate system should we use? How do we represent a line in 3D space? Two points on the line (3D + 3D = 6D). Problem? A point and a direction (3D + 2D = 5D). Problem? Any better parameterization?

Two Plane Parameterization Each ray is parameterized by its two intersection points with two fixed planes. For simplicity, assume the two planes are z = 0 (st plane) and z = 1 (uv plane) Alternatively, we can view (s, t) as the camera index and (u, v) as the pixel index
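The two-plane mapping above can be sketched in a few lines. The plane positions z = 0 (st) and z = 1 (uv) follow the slide; the function name and input convention are illustrative:

```python
def ray_to_stuv(origin, direction):
    """Parameterize a ray by its intersections with the st plane (z = 0)
    and the uv plane (z = 1). Assumes direction[2] != 0, i.e. the ray
    is not parallel to the two planes."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = (0.0 - oz) / dz  # ray parameter where the ray hits z = 0
    b = (1.0 - oz) / dz  # ray parameter where the ray hits z = 1
    return (ox + a * dx, oy + a * dy,   # (s, t)
            ox + b * dx, oy + b * dy)   # (u, v)

# A ray from the origin straight along +z hits both planes at (0, 0):
print(ray_to_stuv((0, 0, 0), (0, 0, 1)))  # (0.0, 0.0, 0.0, 0.0)
```

In the camera-array view, (s, t) picks which camera a stored ray came from and (u, v) picks which pixel within that camera's image.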

Ray Representation For the moment, let's consider just a 2D slice of this 4D ray space. Rendering new pictures = interpolating rays. How should we represent rays? 6D? 5D? 4D?

Light Field Rendering For each desired ray (s, t, u, v): compute its intersection with the (u, v) and (s, t) planes, then blend the closest rays. What does "closest" mean? (Figure: the desired ray (s, t, u, v) and its neighbors (s 1, t 1), (s 2, t 2) on the camera plane and (u 1, v 1), (u 2, v 2) on the image plane.)

Light Field Rendering: Linear Interpolation In 1D, a value at x between samples x 1 and x 2 is y = (1 - a) y 1 + a y 2, where a = (x - x 1) / (x 2 - x 1). What happens in higher dimensions? Bilinear, trilinear, quadralinear interpolation: blend the nearest samples (u 1, v 1), (u 2, v 2) on the image plane and (s 1, t 1), (s 2, t 2) on the camera plane.
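Extending the 1D blend above to the 4D ray space gives quadralinear interpolation over the 2^4 = 16 surrounding samples. A minimal sketch, assuming a toy nested-list light field `L` indexed [s][t][u][v]:

```python
import math

def quadralinear(L, s, t, u, v):
    """Quadralinearly interpolate the 4D light field L at continuous
    coordinates (s, t, u, v), blending the 16 nearest stored rays with
    the product of per-axis linear weights."""
    coords = (s, t, u, v)
    lo = [int(math.floor(c)) for c in coords]      # lower grid corner
    frac = [c - l for c, l in zip(coords, lo)]     # per-axis fraction
    out = 0.0
    for corner in range(16):                       # 2^4 corners of the cell
        idx, w = [], 1.0
        for d in range(4):
            bit = (corner >> d) & 1
            idx.append(lo[d] + bit)
            w *= frac[d] if bit else (1.0 - frac[d])
        out += w * L[idx[0]][idx[1]][idx[2]][idx[3]]
    return out
```

On a light field whose values are linear in (s, t, u, v), this reproduces the exact value, which is a handy sanity check when implementing the renderer.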

Quadralinear Interpolation: serious aliasing artifacts.

Ray Structure All the rays in a light field passing through a single 3D geometry point trace out a line in ray space. (Figure: rays (v 1, t 1), (v 2, t 2), (v 3, t 3) in a 2D light field and the corresponding line in the 2D EPI.)

Why Aliasing? We study aliasing in the spatial domain. Next class we will study the frequency domain (review the Fourier transform beforehand if you can).

Better Reconstruction Method Assume some geometric proxy. Dynamically Reparameterized Light Fields.

Focal Surface Parameterization

Focal Surface Parameterization (2) Intersect each new ray with the focal plane, back-trace the ray into the data cameras, and interpolate the corresponding rays. But this requires a ray-plane intersection per ray. Can we avoid that?

Using a Focal Plane (Figure: desired ray (u, v) with neighbors (u 1, v 1), (u 2, v 2) on the image plane and (s 1, t 1), (s 2, t 2) on the camera plane.) Relative parameterization is hardware friendly: [s, t] corresponds to a particular texture and [u, v] to the texture coordinate. The focal plane can be encoded as a disparity across the light field.
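Encoding the focal plane as a disparity amounts to shifting the pixel coordinate looked up in each data camera in proportion to that camera's offset from the desired ray. A sketch of the idea; the exact sign convention and units of `disparity` depend on your setup and are an assumption here:

```python
def reparameterized_uv(u, v, s, t, si, ti, disparity):
    """Shift the (u, v) lookup for data camera (si, ti) so that rays
    agree on a virtual focal plane. `disparity` is the pixel shift per
    unit of camera-plane offset (hypothetical convention); disparity 0
    recovers plain two-plane interpolation."""
    return (u + disparity * (si - s), v + disparity * (ti - t))
```

In the renderer, this shift is applied to each of the nearest cameras before the bilinear lookup, so changing the disparity sweeps the plane of sharp focus without any per-ray ray-plane intersection.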

Results Quadralinear interpolation vs. focal plane at the monitor. This is the first part of Assignment 1.

Aperture Filtering Quadralinear blending is simple and easy to implement, but it causes aliasing (as we will see later in the frequency analysis). Can we blend more than 4 neighboring rays? 8? 16? 32? Or more? That is aperture filtering.
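Blending many cameras instead of the nearest 4 can be sketched as a weighted average over all cameras inside a synthetic aperture. The Gaussian-style falloff and the function name here are illustrative choices, not the only option:

```python
import math

def aperture_filter_weights(s, t, cam_positions, radius):
    """Normalized blending weights over data cameras for a desired
    camera-plane point (s, t). Every camera within `radius` contributes,
    with weight falling off smoothly toward the aperture edge; a larger
    radius blends more cameras (bigger synthetic aperture)."""
    raw = []
    for i, (si, ti) in enumerate(cam_positions):
        d = math.hypot(si - s, ti - t)
        if d < radius:
            raw.append((i, math.exp(-(d / radius) ** 2)))  # smooth falloff
    total = sum(w for _, w in raw)
    return [(i, w / total) for i, w in raw]
```

Shrinking `radius` back to the 4 nearest cameras recovers something close to plain quadralinear blending; growing it trades aliasing for depth-of-field blur, as the following slides illustrate.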

Variable Aperture

Changing the shape and size of aperture filter

Filtering-2

Rendering with memory coherent Ray Tracing

Small aperture size

Large aperture size

Using a very large aperture

Variable focus

Close focal surface

Distant focal surface

Issues with Light Fields A light field only captures rays within a region (volume) of space. It is huge (and thus needs compression). A software implementation incurs many memory fetches. Solution 1: multitexturing. Solution 2: use graphics hardware.

Light Field Coverage

Multi-Slab Light Fields

Light Field Slabs Solution: extend the light fields! The naïve approach misses rays.

Light Field Slabs

Hardware-based Implementation An LF is an array of images and can be stored very efficiently as textures. With 32x32 cameras at 128x128 pixels and DXT1 compression, each LF is stored in a 1024x1024x16 volume; 6 slabs fit in a 1024x1024x128 volume => 64 MB. Avoid branching by calculating light field slices in the volume: choose the largest direction component for each ray to avoid unnecessary texture fetches.

GPU-based Implementation A two-pass PS2.0 algorithm. First pass: render the geometry, calculate the (s, t, u, v) coordinates, and write them into the back buffer. Second pass: use (s, t, u, v) to query the light field from the volume texture; four bilinear texture fetches, then interpolate the results in the shader. This maps well onto current graphics hardware.

Light Field Compression Based on vector quantization. Preprocessing: build a representative codebook of 4D tiles. Each tile in the light field is then represented by a codebook index. Example: 2x2x2x2 RGB tiles (48 bytes) encoded as 16-bit indices (2 bytes) gives 24:1 compression.

Lightfield Compression by Vector Quantization Advantages: fast at runtime; reasonable compression. Disadvantages: preprocessing is slow; the codebook size is selected manually; it does not take advantage of the structure of the data.
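The runtime side of the VQ scheme above is just a nearest-codeword lookup. A minimal sketch of the encoder, assuming tiles have already been flattened to vectors and a codebook has been built in preprocessing (the codebook construction itself, e.g. by k-means, is omitted):

```python
import numpy as np

def vq_encode(tiles, codebook):
    """Assign each flattened tile to its nearest codeword (toy VQ encoder).

    tiles:    (N, D) array of flattened 4D tiles, e.g. D = 2*2*2*2*3
              for 2x2x2x2 RGB tiles.
    codebook: (K, D) array of representative tiles.
    Returns N indices; decoding is simply codebook[indices]."""
    # Squared distance from every tile to every codeword, via broadcasting.
    d2 = ((tiles[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

Decoding at render time is a single table lookup per tile, which is why VQ is fast at runtime even though building a good codebook is slow.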

3D Imaging System The light field captures all (or almost all) of the necessary rays. If we project the rays onto some image plane, we can do 3D imaging: autostereoscopic displays, 3D TV.

Autostereoscopic Light Field

3D TV A 16-camera horizontal array, an auto-stereoscopic display, and efficient coding and transmission.

Real-time Rendering

The Lumigraph Capture: move the camera by hand. The camera intrinsics are assumed calibrated; the camera pose is recovered from markers.

Lumigraph Postprocessing Obtain a rough geometric model: chroma keying (blue screen) to extract silhouettes, then octree-based space carving. Resample the images to the two-plane parameterization.

Lumigraph Rendering Use rough depth information to improve rendering quality

Lumigraph Rendering Without using geometry Using approximate geometry

Unstructured Lumigraph Rendering A further enhancement of lumigraphs: do not use a two-plane parameterization. Store the original pictures: no resampling. A hand-held camera is moved around an environment.

Unstructured Lumigraph Rendering To reconstruct views, assign a penalty to each original ray: distance to the desired ray (using the approximate geometry), resolution, and feathering near the edges of the image. Construct a camera blending field and render using texture mapping.
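Turning per-camera penalties into a blending field can be sketched as follows. This follows the general unstructured-lumigraph idea of weighting the k best cameras so that the k-th gets weight zero (cameras then fade in and out smoothly as the view moves); the exact penalty terms and the guard for degenerate thresholds are assumptions of this sketch:

```python
def blending_weights(penalties, k=4):
    """Normalized blending weights for one desired ray.

    penalties: per-source-camera penalty (smaller is better), e.g. a
    combination of angular distance, resolution, and edge feathering.
    The k lowest-penalty cameras receive weights that fall linearly to
    zero at the k-th penalty; all others get weight zero."""
    order = sorted(range(len(penalties)), key=lambda i: penalties[i])
    k = min(k, len(penalties))
    thresh = penalties[order[k - 1]] or 1.0  # guard against a zero threshold
    raw = {i: max(0.0, 1.0 - penalties[i] / thresh) for i in order[:k]}
    total = sum(raw.values()) or 1.0
    return {i: w / total for i, w in raw.items()}
```

Evaluating this at every pixel (or at mesh vertices, interpolated in between) produces the camera blending field, which is then rendered with projective texture mapping from the selected source images.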

Unstructured Lumigraph Rendering Blending field Rendering

Other Lightfield Acquisition Devices Spherical motion of a camera around an object samples the space of directions uniformly. A second arm moves a light source to measure the BRDF.

Other Lightfield Acquisition Devices Acquire an entire light field at once, at video rates, with integrated MPEG2 compression for each camera (Bennett Wilburn, Michal Smulski, Mark Horowitz).

Assignment 1 Implement an LF renderer. Read in an array of images (I will provide the link to the images on my computer). Store the images as an array of arrays of RGB pixels. Provide a user interface to control the camera motion (forward, backward, left, right). Implement the quadralinear interpolation. Add a control for the focal plane (disparity) and another for the aperture. Due March 12.