Computational Photography

1 Computational Photography Matthias Zwicker University of Bern Fall 2010

2 Today Light fields: Introduction, Light fields, Signal processing analysis, Light field cameras, Applications

3 Introduction Pinhole camera captures all light rays passing through one point

4 Introduction What if we could capture all light rays in a scene? Plenoptic function: information available to an observer at any point in space and time [Adelson, Bergen]

5 Plenoptic function Most general case: function of 7 variables Ray direction (2 degrees of freedom, dof) Ray origin (3 dof) Wavelength (1 dof) Time (1 dof) Value stored is radiance

6 Plenoptic function 2D image is a 2D slice of the plenoptic function Perspective images: fix camera position, vary ray direction Other arrangements are possible Multi-viewpoint panoramas

7 Applications Panoramas Multi-viewpoint, spherical,... Image based rendering (free choice of viewpoint) Interactive walkthroughs Two panoramas from nearby viewpoints

8 Applications Computing images using signal processing E.g., refocusing Depth extraction 2D slices of the plenoptic function Depth map

9 Applications Autostereoscopic 3D displays, no glasses Holografika (holographic projection screens) Alioscopy (microlens arrays)

10 Capturing the plenoptic function Use many cameras at the same time Plenoptic camera

11 Today Light fields: Introduction, Light fields, Signal processing analysis, Light field cameras, Applications

12 Light fields Want to capture, process, display plenoptic function Need suitable representation to operate on Light fields based on two-plane parameterization

13 Two-plane parameterization Assumption: record light rays only outside convex hull of a scene Need only 2 dof for origin of light ray, since radiance along ray does not change Ray geometry has 4 dof instead of 5 dof Light fields are 4D functions Not recorded Convex hull Scene geometry Recorded ray

14 Two-plane parameterization Ray geometry represented by intersection location with two parallel planes Can interpret as 2 dof for origin, 2 dof for direction Each ray stores RGB radiance Levoy, Hanrahan, Light field rendering
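For concreteness, here is a minimal sketch (Python/NumPy) of the intersection computation, assuming purely for illustration that the (u,v) plane is z = 0 and the (s,t) plane is z = 1 in some common coordinate frame; the plane placement is a free choice of the parameterization:

import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Two-plane coordinates of a 3D ray: its intersection points with the
    (u,v) plane at z = z_uv and the (s,t) plane at z = z_st."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if np.isclose(direction[2], 0.0):
        raise ValueError("ray is parallel to the parameterization planes")
    a_uv = (z_uv - origin[2]) / direction[2]   # ray parameter at the (u,v) plane
    a_st = (z_st - origin[2]) / direction[2]   # ray parameter at the (s,t) plane
    u, v = (origin + a_uv * direction)[:2]
    s, t = (origin + a_st * direction)[:2]
    return u, v, s, t

A ray is then stored or looked up by these four numbers together with its RGB radiance.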

15 Two-plane parameterization Uniform sampling: usually cameras on uniform grid on (u,v) plane Scene behind (s,t) plane Levoy, Hanrahan, Light field rendering

16 Acquiring light fields Sample directly on uniform grid Each camera corresponds to one sample point on the (u,v) plane Otherwise, resampling is necessary

17 Two-plane parameterization Uniform sampling on (u,v) and (s,t) planes Camera plane Levoy, Hanrahan, Light field rendering

18 Two-plane parameterization Spatial resolution: resolution of each image, number of samples on the (s,t) plane, usually a few million Angular resolution: number of cameras (number of directions each scene point is seen from), number of samples on the (u,v) plane, usually a few dozen to a few hundred
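As a purely illustrative sense of scale (the resolutions below are assumptions, not values from the lecture), a light field at these spatial and angular resolutions stored as an uncompressed array:

import numpy as np

n_u, n_v = 17, 17         # angular resolution: number of cameras on the (u,v) plane
n_s, n_t = 1024, 1024     # spatial resolution: pixels per view on the (s,t) plane

light_field = np.zeros((n_u, n_v, n_s, n_t, 3), dtype=np.uint8)  # RGB, 8 bit
print(light_field.nbytes / 2**30, "GiB")   # about 0.85 GiB for this configuration

Even modest angular resolution multiplies the storage of a single image by the number of cameras.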

19 Light field rendering Given uniformly sampled light field Place camera arbitrarily relative to parameterization planes Desired image consists of a set of rays through pixel centers Rendering requires interpolation In 4D, each ray given by 4 parameters Desired rays Levoy, Hanrahan, Light field rendering

20 Interpolation (2D illustration) Three panels, each showing a virtual camera ray intersecting the (s,t) and (u,v) planes: nearest neighbor, uv bilinear, uvst quadrilinear
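The uvst quadrilinear case can be sketched as follows; a minimal NumPy routine assuming the light field is stored as an array lf[u, v, s, t, channel] and that the desired camera ray has already been converted to continuous (u, v, s, t) sample coordinates (boundary handling is simplified to clamping):

import numpy as np

def quadrilinear_sample(lf, u, v, s, t):
    """Interpolate lf at continuous (u, v, s, t), combining the 2^4 = 16
    surrounding grid samples with tensor-product linear weights."""
    coords = [u, v, s, t]
    lo = [int(np.floor(c)) for c in coords]        # lower grid index per axis
    frac = [c - l for c, l in zip(coords, lo)]     # fractional offset per axis

    result = np.zeros(lf.shape[-1])
    for corner in range(16):                       # enumerate the 16 corners
        idx, weight = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1             # 0: lower, 1: upper neighbor
            idx.append(min(max(lo[axis] + bit, 0), lf.shape[axis] - 1))
            weight *= frac[axis] if bit else 1.0 - frac[axis]
        result += weight * lf[idx[0], idx[1], idx[2], idx[3]]
    return result

Nearest neighbor keeps only the single closest sample, and uv bilinear interpolates over (u,v) while snapping (s,t) to the nearest grid point.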

21 Today Light fields: Introduction, Light fields, Signal processing analysis, Light field cameras, Applications

22 Signal processing analysis Want to understand sampling, reconstruction, aliasing of light fields, similarly to 2D images Benefits Antialiasing algorithms High quality filters Efficient algorithms Chai et al., Plenoptic sampling

23 Ray space (2D illustration) A ray is a point in ray space Origin of v measured relative to t Two-plane parameterization Ray space Caution: different arrangements occur in the literature!

24 Ray space All rays through a point form a line in ray space Slope inversely related to depth Two plane parameterization Ray space Point towards infinity: slope becomes horizontal Point on v-axis: slope is diagonal
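A short derivation of this slope relation, as a sketch under one common convention (planes a distance d apart, t on the camera plane, v measured relative to t as noted on the previous slide; signs flip for other arrangements): a scene point at lateral position x and depth z measured from the camera plane is hit by the ray starting at t exactly when

\[
v = (x - t)\,\frac{d}{z}
\qquad\Longrightarrow\qquad
\frac{\mathrm{d}v}{\mathrm{d}t} = -\frac{d}{z}.
\]

All rays through the point therefore lie on a line in (t,v) ray space whose slope is inversely proportional to depth: as z → ∞ the line becomes horizontal, and for a point on the v-plane (z = d) the slope is −1, i.e. diagonal, matching the two cases above.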

25 Ray space Scene with constant depth forms a sheared structure in ray space Two plane parameterization Ray space

26 Ray space Visualize ray space of one scanline Moving viewpoint Scanline Scene with several depth layers Ray space: epipolar plane image (EPI)

27 Fourier analysis Fourier transform Ray space Light field spectrum

28 Fourier analysis Scenes with bounded depth range EPI of one scanline Light field spectrum: bow tie, with zero power outside
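Why a bow tie: a sketch of the standard argument (Chai et al., Plenoptic sampling), using the same convention as the ray-space derivation above and assuming a Lambertian scene. For a scene at constant depth z the EPI is a sheared texture, l(t,v) = g(t + v\,z/d), and its 2D Fourier transform is

\[
L(\Omega_t,\Omega_v) = 2\pi\, G(\Omega_t)\,\delta\!\left(\Omega_v - \tfrac{z}{d}\,\Omega_t\right),
\]

i.e. all energy lies on a line through the origin whose slope depends on the depth. If the scene depths range over [z_{\min}, z_{\max}], the spectrum is confined to the wedge between the lines for z_{\min} and z_{\max}, which is the bow tie, with zero power outside.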

29 Fourier analysis Scene with several depth layers Ray space Fourier transform

30 Sampling and reconstruction Continuous reconstruction allows interpolation Rendering of new views Before: quadrilinear interpolation/reconstruction Can we do better? Desired rays uvst quadrilinear reconstruction filter

31 Sampling and reconstruction Sampling leads to replication of spectra Reconstruction is multiplication with filter spectrum Overlap with non-central replicas is aliasing (Idealized) spectrum of reconstruction filter Reconstruction with aliasing Aliasing appears as double edges

32 Sampling and reconstruction Sheared reconstruction filter to match depth of scene allows lower sampling rate Higher sampling rate, isotropic reconstruction filter: aliasing Lower sampling rate, optimal reconstruction filter: no aliasing

33 Sheared reconstruction No aliasing at lower sampling rate

34 Sheared reconstruction Shearing reconstruction filter equivalent to shearing signal before filtering Visualization using 3D light field example Only horizontal camera movement 3D light field interpreted as image stack

35 Sheared reconstruction Input Reconstruction: filtering across images Shear determines which depth is aligned with filter
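In the spatial domain this "filtering across images" is a shift-and-add: each view is translated in proportion to its camera position and the results are averaged. A minimal sketch for the 3D (horizontal-camera-motion) light field; the shift-per-unit-of-camera-motion value, which selects the aligned depth, is an assumption left to the caller:

import numpy as np

def shear_and_average(stack, camera_pos, shift_per_unit):
    """Sheared reconstruction by shift-and-add.

    stack:          views as an array of shape (n_views, height, width, channels)
    camera_pos:     1D array of horizontal camera positions, one per view
    shift_per_unit: horizontal image shift in pixels per unit of camera motion;
                    this is the shear, and choosing it selects which depth ends
                    up aligned (in focus) across the views.
    """
    result = np.zeros(stack.shape[1:], dtype=np.float64)
    for view, pos in zip(stack, camera_pos):
        shift = int(round(pos * shift_per_unit))
        # np.roll wraps around at the border; real code would pad or use
        # subpixel interpolation instead.
        result += np.roll(view, shift, axis=1)
    return result / len(stack)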

36 Filter visualization No shearing, high bandwidth in t and v Aliasing, as before

37 Filter visualization Sheared, high bandwidth in t and v No (or reduced) aliasing

38 Filter visualization Sheared, low bandwidth in t (angle) Large synthetic aperture

39 Synthetic aperture Small synthetic aperture Large synthetic aperture

40 Synthetic aperture The scene depth aligned with the sheared filter is sharp, in focus Can choose focus arbitrarily by varying shear! Size of filter determines depth of field Adjusting shear of large synthetic aperture filter to change focus

41 Filter visualization Sheared, low bandwidth in t and v Blurry images

42 Note Shearing the filter and shearing the signal are equivalent Sheared filter Sheared signal

43 Observation Often, light fields are undersampled, or aliased Limited number of viewpoints Sheared reconstruction may still be aliased! Aliasing, overlap of reconstruction filter with replicas Aliasing, double images

44 Reconstruction without aliasing Avoid aliasing by reducing spatial or angular bandwidth Low spatial bandwidth, blurry image Low angular bandwidth (wide aperture), shallow depth of field

45 Reconstruction without aliasing Low spatial bandwidth, blurry image Low angular bandwidth, shallow depth of field

46 Improved reconstruction Construct reconstruction filter that preserves as much of the signal as possible Improved reconstruction for undersampled (aliased) light fields

47 Improved reconstruction Blurring (low-pass filtering) images separately: low bandwidth in v Large aperture filter: low bandwidth in t Large aperture filtering and blurring combined (convolution): order doesn't matter

48 Improved reconstruction Frequency domain construction of the combined filter

49 Spatial domain implementation 1. Wide aperture filtering on original input images 2. Compute set of blurred input images 3. Wide aperture filter of blurred images 4. Subtract result of 3 from 1 5. Add result of 4 to 2 In practice, filters are implemented e.g. using Gaussians
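A minimal sketch of these five steps for the 3D light-field-as-image-stack example of the previous slides (Python/NumPy/SciPy). The Gaussian width, the shear value, and the choice of which blurred view the correction from step 4 is added to are illustrative assumptions, not prescribed by the slide:

import numpy as np
from scipy.ndimage import gaussian_filter

def wide_aperture(stack, camera_pos, shift_per_unit):
    # Shift-and-average synthetic aperture, as in the earlier sketch.
    out = np.zeros(stack.shape[1:], dtype=np.float64)
    for view, pos in zip(stack, camera_pos):
        out += np.roll(view, int(round(pos * shift_per_unit)), axis=1)
    return out / len(stack)

def improved_reconstruction(stack, camera_pos, shift_per_unit, blur_sigma=2.0):
    stack = np.asarray(stack, dtype=np.float64)
    # 1. Wide aperture filtering on the original input images.
    wide_orig = wide_aperture(stack, camera_pos, shift_per_unit)
    # 2. Compute the set of blurred (low-pass filtered) input images.
    blurred = np.stack([gaussian_filter(v, sigma=(blur_sigma, blur_sigma, 0))
                        for v in stack])
    # 3. Wide aperture filter of the blurred images.
    wide_blur = wide_aperture(blurred, camera_pos, shift_per_unit)
    # 4. Subtract the result of 3 from 1, and
    # 5. add it to the blurred view used for the rendered viewpoint
    #    (here simply the central view, a simplification of "result of 2").
    return blurred[len(stack) // 2] + (wide_orig - wide_blur)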

50 Improved reconstruction Blurry reconstruction Large aperture reconstruction Improved reconstruction

51 Today Light fields: Introduction, Light fields, Signal processing analysis, Light field cameras, Applications

52 Light field cameras Already described by Lippmann, 1908 Using a microlens array Similar construction by Adelson & Wang, 1992 For depth reconstruction

53 Light field cameras Camera arrays

54 Light field cameras Robotic cameras Restricted to static scenes

55 Light field cameras Hand-held cameras Microlens array mounted on top of camera sensor Commercial product Camera with microlens array

56 Light field cameras Hand-held cameras Sensor pixels Scene point Each sensor pixel of a microlens observes the scene point from a slightly different viewpoint Viewpoints distributed over aperture of main lens Other arrangements possible
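To make that concrete, a minimal sketch of extracting sub-aperture views from the raw sensor image under an idealized layout (each microlens covers an axis-aligned k x k block of pixels; no calibration, vignetting correction, or demosaicing, all of which real plenoptic data requires):

import numpy as np

def raw_to_subaperture(raw, k):
    """Rearrange a plenoptic raw image of shape (H, W), with H and W multiples
    of k, into an array of shape (k, k, H//k, W//k): views[i, j] collects the
    pixel at offset (i, j) under every microlens, i.e. the image seen through
    one position of the main-lens aperture."""
    H, W = raw.shape
    blocks = raw.reshape(H // k, k, W // k, k)   # (lens row, i, lens col, j)
    return blocks.transpose(1, 3, 0, 2)          # -> (i, j, lens row, lens col)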

57 Light field cameras Sensor data Close-up

58 Light field cameras

59 Today Light fields: Introduction, Light fields, Signal processing analysis, Light field cameras, Applications

60 Digital refocusing Change focus of image after capture

61 Digital refocusing Change focus of image after capture

62 Digital refocusing As before: adjust shear of reconstruction filter Shear determines focusing distance But careful, different parameterization Now: ray tracing explanation
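One common way to write the ray-tracing view down (following Ng et al.'s light field photography formulation; the exact form depends on the chosen parameterization, so treat this as a sketch): with L_F(u,v,s,t) the light field between the lens plane (u,v) and the sensor plane (s,t) at separation F, the image focused on a virtual plane at distance αF is

\[
E_{\alpha F}(s,t) \;=\; \frac{1}{\alpha^{2}F^{2}}
\iint L_F\!\Bigl(u,\; v,\; u + \frac{s-u}{\alpha},\; v + \frac{t-v}{\alpha}\Bigr)\, \mathrm{d}u\, \mathrm{d}v .
\]

Varying α shears the set of samples being integrated, which is the "shear determines focusing distance" statement above; discretized over the sampled (u,v) positions it becomes a shift-(and slightly rescale-)and-add over the sub-aperture views, as in the earlier sketch.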

63 Digital refocusing

64 Digital refocusing

65 Digital refocusing

66 Digital refocusing

67 3D displays Displays that show light fields Use view-dependent pixels Pixels show a different color depending on viewing angle Autostereoscopic viewing: left and right eyes see a different color at each pixel No glasses! Spatial resolution: how many pixels Angular resolution: how many directions per pixel View-dependent pixels Left eye Right eye

68 Integral imaging First 3D autostereoscopic displays [Lippmann 1908] Using lenticular optics Each lens is a view-dependent pixel Same two-plane parameterization to describe rays (u,v) (s,t)

69 Integral imaging Place image that encodes light field data behind lenticular sheet (lens array) Lens array Encoded light field without lenticular sheet Image from a certain viewing position

70 Parallax barriers Same concept as lenticulars, different implementation Two-plane parameterization to describe rays (s,t) (u,v)

71 Limitations & outlook Parallax barriers & lenticulars suffer from poor resolution Low angular resolution: only a few (<10) directional (angular) samples per pixel Aliasing problems, possible ghosting Holographic screens Higher angular resolution Maybe this is the future?

72 Summary Plenoptic function: theoretical concept that captures complete information of light distribution in a scene Light fields: practical representation of plenoptic function restricted to 4D Two plane parameterization Convenient to capture, process Signal processing analysis, bow tie spectrum Sheared reconstruction, synthetic aperture filters Light field cameras Camera arrays, hand-held camera Applications Rendering images from novel viewpoints After the fact processing for photography, refocusing Other processing, depth extraction 3D displays

73 Next time From 2D images to 3D geometry
