Coding and Modulation in Cameras

Coding and Modulation in Cameras. Ramesh Raskar, with Ashok Veeraraghavan, Amit Agrawal, Jack Tumblin, and Ankit Mohan. Mitsubishi Electric Research Labs (MERL), Cambridge, MA, 2007.

Overview. Coding in Time: coded exposure for motion deblurring. Coding in Space: coded aperture for digital refocusing and extended depth of field, and optical heterodyning for light field capture (a 4D-to-2D mapping).

Motion Blurred Input Photo

Approximate rectified crop of the photo. Image deblurred by solving a linear system; no post-processing.

Flutter Shutter Camera (Raskar, Agrawal, Tumblin, SIGGRAPH 2006). A ferroelectric LCD shutter in front of the lens is switched opaque or transparent in a rapid binary sequence.

Coded Exposure Photography: Assisting Motion Deblurring Using a Fluttered Shutter. Comparison of the captured single photo and the deblurred result for four exposure strategies: short exposure (the image is dark and noisy), traditional shutter (the result has banding artifacts), MURA-coded shutter (some spatial frequencies are lost), and our coded shutter (the decoded image is as good as an image of a static scene).

Exposure choices for capturing fast-moving objects. Traditional camera: keep the shutter open for the entire exposure duration; the moving object creates smear. Coded exposure camera: flutter the shutter open and closed with a pseudo-random binary sequence within the exposure duration to encode the blur. Short exposure: keep the shutter open for a very short duration; this avoids blur, but the image is dark and suffers from noise.
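To make the encoding concrete, here is a minimal 1-D sketch (Python/NumPy) of coded-exposure motion blur and of deblurring by solving a linear system, as described above. The pseudo-random code, the signal, and all sizes are illustrative assumptions, not the optimized flutter sequence used in the actual camera.

```python
# Minimal 1-D sketch of coded-exposure motion blur and deblurring by solving
# a linear system. The binary code, signal, and sizes are illustrative and do
# not reproduce the optimized flutter sequence used in the actual camera.
import numpy as np

rng = np.random.default_rng(0)

n = 256                        # samples in the sharp 1-D scanline
k = 32                         # blur length in pixels (motion during exposure)

# Pseudo-random binary flutter code: shutter open (1) or closed (0) per chop.
code = rng.integers(0, 2, size=k).astype(float)
code[0] = code[-1] = 1.0       # keep the first and last chops open

sharp = rng.random(n)          # stand-in for one scanline of the sharp image

def smear_matrix(psf, n):
    """Toeplitz matrix A such that blurred = A @ sharp."""
    m = n + len(psf) - 1
    A = np.zeros((m, n))
    for i in range(n):
        A[i:i + len(psf), i] = psf
    return A

A_coded = smear_matrix(code / code.sum(), n)     # fluttered shutter
A_box = smear_matrix(np.ones(k) / k, n)          # traditional open shutter

noise = 0.005
b_coded = A_coded @ sharp + noise * rng.standard_normal(A_coded.shape[0])
b_box = A_box @ sharp + noise * rng.standard_normal(A_box.shape[0])

# Deblur by solving the linear system in the least-squares sense.
x_coded = np.linalg.lstsq(A_coded, b_coded, rcond=None)[0]
x_box = np.linalg.lstsq(A_box, b_box, rcond=None)[0]

print("coded-exposure RMSE:", np.sqrt(np.mean((x_coded - sharp) ** 2)))
print("box-exposure RMSE:  ", np.sqrt(np.mean((x_box - sharp) ** 2)))
```

On such a synthetic example the coded-exposure solve typically recovers the scanline with much lower error than the box blur, which is the point of the fluttered shutter.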

Coded motion blur preserves high spatial frequencies: the deconvolution filter's frequency response is nearly flat, so deconvolution becomes a well-posed problem. [Plots: log-magnitude frequency responses of the encoding (blurring) filter and the decoding (deblurring) filter, flat shutter vs. ours.]
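The claim about the frequency response can be checked numerically. The sketch below compares the magnitude response of a traditional box (open-shutter) PSF with that of a fluttered binary code; a random binary code stands in for the optimized sequence, and the sizes are illustrative.

```python
# Compare the magnitude frequency response of a traditional box PSF with
# that of a fluttered binary code. A random binary code stands in for the
# optimized sequence; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
k = 52                                    # number of chops within the exposure

box = np.ones(k) / k                      # traditional shutter: box blur
code = rng.integers(0, 2, size=k).astype(float)
code /= code.sum()                        # fluttered shutter PSF

n_fft = 1024
H_box = np.abs(np.fft.rfft(box, n_fft))
H_code = np.abs(np.fft.rfft(code, n_fft))

# The box response has deep nulls (sinc zeros), so its inverse (the decoding
# filter) blows up at those frequencies; the coded response stays broadband.
print("min |H|, box:  ", H_box.min())
print("min |H|, coded:", H_code.min())
print("worst-case gain of decoding filter, box:  ", 1.0 / max(H_box.min(), 1e-12))
print("worst-case gain of decoding filter, coded:", 1.0 / H_code.min())
```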

Defocus Blurred Photo

Digital Refocusing Refocused Image on Person

Coded exposure: temporal, 1-D broadband code. Coded aperture: spatial, 2-D broadband code.

Digital Refocusing

Coded Aperture Camera. The aperture of a 100 mm lens is modified by inserting a coded mask with a chosen binary pattern; the rest of the camera is unmodified.
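As a rough illustration of what the coded aperture does to defocus blur (an out-of-focus region is approximately the sharp image convolved with a scaled copy of the aperture pattern, as the later comparison slide notes), here is a small sketch. The 7×7 binary mask is an arbitrary stand-in, not the actual mask used in the camera.

```python
# Small sketch of coded-aperture defocus: an out-of-focus region is roughly
# the sharp image convolved with a scaled copy of the aperture pattern.
# The 7x7 binary mask here is an arbitrary stand-in, not the actual mask.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)

sharp = rng.random((128, 128))                          # stand-in in-focus image

mask = rng.integers(0, 2, size=(7, 7)).astype(float)    # coded aperture pattern
mask /= mask.sum()                                      # normalize transmitted light

open_aperture = np.ones((7, 7)) / 49.0                  # conventional open aperture

blur_coded = fftconvolve(sharp, mask, mode="same")      # encoded defocus blur
blur_open = fftconvolve(sharp, open_aperture, mode="same")

# The coded PSF generally preserves more spatial frequencies than the open
# aperture, which is what makes deblurring (and refocusing) better conditioned.
print("min |FFT|, coded aperture:", np.abs(np.fft.fft2(mask, s=(128, 128))).min())
print("min |FFT|, open aperture: ", np.abs(np.fft.fft2(open_aperture, s=(128, 128))).min())
```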

Comparison with Traditional Camera. [Photos: captured blurred photo and deblurred image for the encoded blur camera and for the traditional camera.]

Comparison with Small Aperture. [Photos: small-aperture image, captured blurred image, and deblurred image.]

Digital Refocusing Captured Blurred Image

Digital Refocusing Refocused Image on Person

Can we capture more information by inserting a mask?

[Diagrams: Encoded Blur Camera (lens, mask, sensor) and Heterodyne Light Field Camera (scanner lens, bellows, mask, sensor).]

Lenslet-based Light Field camera

Stanford Plenoptic Camera [Ng et al. 2005]: Contax medium-format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array with 125 μm square-sided microlenses. 4000 × 4000 pixels and 292 × 292 lenses give 14 × 14 pixels per lens.

Digital Refocusing (Lenslet array)

Heterodyne Light Field Camera (scanner lens, bellows, mask, sensor). A Canon flatbed scanner serves as the sensor; the mask is ~1 cm in front of the sensor.

Capturing the 4D light field: 2D sensor image, with a zoom-in showing the light field encoding.

Movie: Digital Refocusing by taking 2D Projections of 4D Light Field

Movie: Different Views obtained from 2D Slices of 4D light field

How to Capture a 4D Signal with a 2D Sensor? Mask-based modulation followed by software demodulation. [Diagram: object, main lens, mask, sensor, recovered light field. The photographic signal (the light field) is modulated by a high-frequency carrier (the mask); the incident modulated signal is demodulated in software using the reference carrier.]

[Diagram: a bright background object and a dark foreground object imaged through the main lens onto the sensor. Rays are parameterized by position x on the reference plane (x-plane) and angle θ, giving the primal-domain light field.]

[Diagram: the primal-domain light field and, via the Fourier transform, the Fourier-domain light field.]

The sensor measures a 1-D projection of the primal-domain light field; equivalently, that projection is the inverse Fourier transform of a 1-D slice of the Fourier-domain light field.
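This projection/slice relationship can be verified numerically; the short sketch below uses random data as a stand-in for a 2-D light field.

```python
# Numerical check of the slice/projection statement above: summing a 2-D
# light field L(x, theta) over theta (what the sensor does) equals the
# inverse 1-D FFT of the f_theta = 0 slice of its 2-D FFT. Random data is
# used purely as a stand-in light field.
import numpy as np

rng = np.random.default_rng(3)
L = rng.random((64, 32))             # L[x, theta]: 64 spatial x 32 angular samples

sensor = L.sum(axis=1)               # 1-D projection over the angular dimension

F = np.fft.fft2(L)                   # Fourier-domain light field
slice_at_ftheta0 = F[:, 0]           # 1-D slice along f_x at f_theta = 0
projection_from_slice = np.fft.ifft(slice_at_ftheta0).real

print(np.allclose(sensor, projection_from_slice))   # True
```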

Placing a mask in the optical path multiplies the incident light field by the light field of the mask; in the Fourier domain, this corresponds to convolution with the Fourier transform of the mask.

[Animation: successive frames showing the Fourier-domain light field being convolved with the Fourier transform of the mask.]

How to Capture the 4D Light Field with a 2D Sensor? [Plot: in Fourier light field space, the band-limited light field occupies |f_x| ≤ f_x0 and |f_θ| ≤ f_θ0, while the sensor slice lies along the f_x axis.]

Extra sensor bandwidth along f_x cannot capture the extra dimension (f_θ) of the light field.

[Plot: the sensor slice along f_x leaves the rest of the (f_x, f_θ) plane unknown.]

Solution: the modulation theorem. Make spectral copies of the 2D light field with a modulation function; the modulation function is a sum of cosines, realized as a physical mask tile with period 1/f_0.
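The sketch below illustrates the 1-D version of this modulation step: multiplying a band-limited signal by a sum-of-cosines carrier produces shifted copies of its spectrum around each carrier harmonic. The frequencies and sizes are illustrative assumptions.

```python
# 1-D sketch of the modulation theorem used here: multiplying a band-limited
# signal by a sum-of-cosines carrier creates shifted copies of its spectrum
# around each carrier harmonic. Frequencies and sizes are illustrative.
import numpy as np

n = 4096
t = np.arange(n) / n

# Band-limited "signal": a couple of low-frequency components.
signal = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

# Carrier: constant plus four cosine harmonics of f0 (a physical mask cannot
# be negative, hence the constant term).
f0 = 200
carrier = 1.0 + sum(np.cos(2 * np.pi * k * f0 * t) for k in range(1, 5))

modulated = signal * carrier

spectrum = np.abs(np.fft.rfft(modulated))
peaks = np.flatnonzero(spectrum > 0.2 * spectrum.max())
print(peaks.tolist())   # copies of the signal spectrum appear near 0, f0, 2*f0, 3*f0, 4*f0
```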

Prototype: scanner lens, bellows, scanner 1-D sensor, with the mask on top of the scanner. The mask is a sum of 4 cosines, giving 9 impulses in its Fourier transform and hence 9 angular samples.

With the modulation in place, the sensor slice captures the entire light field. [Plot: modulation function and modulated light field in Fourier space; the spectral copies line up along the sensor slice.]

Computing the 4D Light Field: 2D sensor image (1629 × 2052) → 2D FFT → 2D Fourier transform (1629 × 2052), containing 9 × 9 = 81 tiles → reshape the 2D tiles into 4D planes (181 × 228 × 9 × 9) → 4D IFFT → 4D light field (181 × 228 × 9 × 9).
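A simplified sketch of this tile-rearrangement pipeline follows; fftshift conventions, windowing, and normalization are glossed over, so it illustrates the idea rather than reproducing the exact reconstruction used in the talk.

```python
# Sketch of the demultiplexing pipeline above: take the 2-D FFT of the sensor
# image, cut the spectrum into 9 x 9 tiles (one per spectral copy), stack the
# tiles into a 4-D array, and apply a 4-D inverse FFT to recover the light
# field. Boundary handling and normalization are simplified.
import numpy as np

def sensor_to_lightfield(img, n_ang=9):
    """img: 2-D sensor image whose sides are divisible by n_ang."""
    h, w = img.shape
    th, tw = h // n_ang, w // n_ang            # tile size = spatial resolution

    F = np.fft.fftshift(np.fft.fft2(img))      # centered 2-D Fourier transform

    # Cut the centered spectrum into an (n_ang, th, n_ang, tw) grid of tiles,
    # then reorder to (th, tw, n_ang, n_ang): spatial frequencies x angular copies.
    tiles = F.reshape(n_ang, th, n_ang, tw).transpose(1, 3, 0, 2)

    # 4-D inverse FFT over all four axes recovers an (x, y, theta_x, theta_y) light field.
    return np.fft.ifftn(np.fft.ifftshift(tiles, axes=(2, 3)))

# Example with the sizes quoted on the slide: 1629 x 2052 sensor image,
# 9 x 9 angular samples, 181 x 228 spatial samples.
img = np.random.default_rng(4).random((1629, 2052))
lf = sensor_to_lightfield(img)
print(lf.shape)    # (181, 228, 9, 9)
```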

To recover the light field from the sensor slice, reshape the Fourier-domain tiles. [Plot: recovered Fourier-domain light field.]

Coded Masks for Cameras. Encoded Blur Camera: captures coded blur; full-resolution digital refocusing; coarse, broadband mask in the aperture; the captured image is the convolution of the sharp image with the mask. Heterodyne Light Field Camera: captures the light field; complex refocusing at reduced resolution; fine, narrowband mask close to the sensor; the incoming light field is modulated by the mask.

The Fourier transform of the optimal 1D mask pattern is a set of 1D impulses, so the optimal 1D mask is a sum of cosines; the optimal 2D mask is a sum of cosines in 2D. The number of angular samples equals the number of impulses in the Fourier transform of the mask along that dimension.

Zoom-in of the cosine mask (period 1/f_0). The mask is the sum of cosines with four harmonics; the plot shows the individual cosines. Since the physical mask cannot have negative values, a constant must be added.
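A small sketch of constructing such a mask profile: four cosine harmonics plus a constant so the transmittance stays non-negative. The fundamental frequency and sample count are illustrative assumptions.

```python
# Sketch of a 1-D cosine mask like the one described above: four harmonics of
# f0 plus a constant so the transmittance stays in [0, 1]. Its Fourier
# transform has 9 impulses (DC plus +/- 4 harmonics), i.e. 9 angular samples.
import numpy as np

n = 2048                      # samples across the mask
f0 = 16                       # fundamental frequency in cycles across the mask (illustrative)
x = np.arange(n) / n

mask = sum(np.cos(2 * np.pi * k * f0 * x) for k in range(1, 5))
mask = mask - mask.min()      # add a constant: physical transmittance cannot be negative
mask = mask / mask.max()      # normalize to [0, 1]

spectrum = np.abs(np.fft.rfft(mask - mask.mean()))
harmonics = np.flatnonzero(spectrum > 0.1 * spectrum.max())
print(harmonics)              # impulses at f0, 2*f0, 3*f0, 4*f0 (plus DC in the full transform)
```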

Mask-based Heterodyning. [Diagram: object, main lens, mask, and sensor; the mask sits at distance d from the sensor, which is at distance v from the lens.] The mask modulation function makes an angle α = (d/v)(π/2) in the (f_x, f_θ) Fourier light field space.

Implementation: Light Field Camera (scanner lens, bellows, mask, sensor). We use a Canon flatbed scanner as the sensor with a Nikkor lens in front. The mask is placed about 1 cm in front of the sensor.