High Dynamic Range Imaging.


High Dynamic Range [3] In photography, dynamic range (DR) is measured in exposure value (EV) differences, or stops, between the brightest and darkest parts of the image that show detail.

HDR imaging [4] HDR imaging may involve shooting digital images at different exposures and combining them selectively to retain detail in both light and dark areas despite the limited dynamic range of the sensor array. (Example: a wide-dynamic-range image and the two images it is created from, a short-exposure image and a long-exposure image.)

Tone Mapping [3] Tone mapping reduces overall contrast to facilitate the display of HDR images on devices with a lower dynamic range (LDR), and can also be applied to produce images with preserved or exaggerated local contrast for artistic effect. (Examples: simple contrast reduction; local tone mapping.)
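
As a toy illustration of global contrast reduction (the well-known Reinhard global operator, not one of the specific operators referenced above), scene luminance L can be compressed into display range with L/(1+L):

```python
import numpy as np

def reinhard_global(lum):
    """Global tone mapping: compress luminance in [0, inf) into [0, 1)."""
    lum = np.asarray(lum, dtype=np.float64)
    return lum / (1.0 + lum)

# Five orders of magnitude of scene luminance squeezed into display range.
hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
ldr = reinhard_global(hdr)
```

Being a single global curve, this preserves the brightness ordering of pixels but flattens local contrast; local operators trade that ordering for more detail.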

Brightness and Intensity Value [1] When we photograph a scene, either with film or an electronic imaging array, and digitize the photograph to obtain a two-dimensional array of brightness values, these values are rarely true measurements of relative radiance in the scene: the mapping from radiance to pixel value is nonlinear, being the composition of several nonlinear mappings that occur in the photographic process.

Characteristic Curve [1] The exposure X is defined as the product of the irradiance E at the film and the exposure time Δt, so its units are J·m⁻²:

X = E · Δt

After the development, scanning and digitizing processes, we obtain a digital number Z, which is a nonlinear function of the original exposure X at the pixel. This function f is the characteristic curve of the film:

Z = f(X)

We make the reasonable assumption that f is monotonically increasing, so its inverse f⁻¹ is well defined.

Our Goal: Irradiance Reconstruction [1] From a measured intensity Z, recover the exposure X = f⁻¹(Z); from the exposure and the known exposure time, recover the irradiance E = X / Δt.
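
In log form this is ln E = g(Z) − ln Δt with g = ln f⁻¹, which is the quantity the algorithm below recovers. A minimal sketch, assuming g is given as a 256-entry lookup table (here a toy linear camera, so g(z) = ln(z/255), clipped at z = 0 purely for illustration):

```python
import numpy as np

def log_irradiance(Z, g, dt):
    """Map 8-bit pixel values Z to log irradiance: ln(E) = g(Z) - ln(dt)."""
    return g[Z] - np.log(dt)

# Toy linear camera: f(X) = 255*X, hence g(z) = ln(z/255).
g = np.log(np.clip(np.arange(256), 1, None) / 255.0)
Z = np.array([[64, 128], [192, 255]], dtype=np.uint8)
lnE = log_irradiance(Z, g, dt=0.5)  # a half-second exposure
```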

Formulation [1] The input to our algorithm is a number of digitized photographs taken from the same vantage point with different known exposure durations Δt_j. We will assume that the scene is static and that this process is completed quickly enough that lighting changes can be safely ignored. It can then be assumed that the film irradiance values E_i for each pixel i are constant. We will denote pixel values by Z_ij, where i is a spatial index over pixels and j indexes over exposure times Δt_j.

Minimization Problem [1] We wish to recover the function g and the irradiances E_i that best satisfy the set of equations arising from Equation 2, g(Z_ij) = ln E_i + ln Δt_j, in a least-squared error sense. We note that recovering g only requires recovering the finite number of values that g(z) can take, since the domain of Z, pixel brightness values, is finite. Letting Z_min and Z_max be the least and greatest pixel values (integers), N be the number of pixel locations and P be the number of photographs, we formulate the problem as one of finding the (Z_max − Z_min + 1) values of g(z) and the N values of ln E_i that minimize the following quadratic objective function:

O = Σ_{i=1..N} Σ_{j=1..P} [g(Z_ij) − ln E_i − ln Δt_j]² + λ Σ_{z=Z_min+1..Z_max−1} g″(z)²

The second sum is a smoothness term, with g″(z) = g(z−1) − 2g(z) + g(z+1). Minimizing O is a straightforward linear least squares problem.

Minimization Problem [1] Since g(z) will typically have a steep slope near Z_min and Z_max, we should expect that g(z) will be less smooth and will fit the data more poorly near these extremes. To recognize this, we can introduce a weighting function w(z) to emphasize the smoothness and fitting terms toward the middle of the curve. A sensible choice of w is a simple hat function:

w(z) = z − Z_min   for z ≤ (Z_min + Z_max)/2
w(z) = Z_max − z   for z > (Z_min + Z_max)/2

Equation 3 now becomes:

O = Σ_{i=1..N} Σ_{j=1..P} {w(Z_ij)[g(Z_ij) − ln E_i − ln Δt_j]}² + λ Σ_{z=Z_min+1..Z_max−1} [w(z) g″(z)]²
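
The hat function is a one-liner; a sketch for 8-bit pixel values (the function name is mine):

```python
import numpy as np

def hat_weight(z, z_min=0, z_max=255):
    """Triangular weight: largest for mid-range values, zero at the extremes."""
    z = np.asarray(z, dtype=np.float64)
    mid = 0.5 * (z_min + z_max)
    return np.where(z <= mid, z - z_min, z_max - z)

w = hat_weight(np.arange(256))
```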

Minimization Problem [1] Clearly, the pixel locations should be chosen so that they have a reasonably even distribution of pixel values from Z min to Z max, and so that they are spatially well distributed in the image. Furthermore, the pixels are best sampled from regions of the image with low intensity variance so that radiance can be assumed to be constant across the area of the pixel, and the effect of optical blur of the imaging system is minimized.
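
One simple way to obtain spatially well-distributed samples is to take them at the centers of a regular grid; this sketch does only that, and a fuller version would additionally reject locations whose local intensity variance is high (all names here are mine):

```python
import numpy as np

def sample_locations(img, n=50):
    """Return about n (row, col) sample positions spread evenly over img."""
    h, w = img.shape[:2]
    k = int(np.ceil(np.sqrt(n)))                         # grid side length
    ys = np.linspace(h / (2 * k), h - h / (2 * k), k).astype(int)
    xs = np.linspace(w / (2 * k), w - w / (2 * k), k).astype(int)
    return [(y, x) for y in ys for x in xs][:n]

pts = sample_locations(np.zeros((480, 640)), n=50)
```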

Construction of Radiance Map [1] Once the response curve g is recovered, it can be used to quickly convert pixel values to relative radiance values, assuming the exposure Δt_j is known. For robustness, and to recover high dynamic range radiance values, we should use all the available exposures for a particular pixel to compute its radiance. For this, we reuse the weighting function in Equation 4 to give higher weight to exposures in which the pixel's value is closer to the middle of the response function:

ln E_i = Σ_{j=1..P} w(Z_ij)(g(Z_ij) − ln Δt_j) / Σ_{j=1..P} w(Z_ij)
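
The weighted average can be evaluated per pixel over the whole image stack; a sketch assuming aligned 8-bit exposures and a recovered 256-entry g:

```python
import numpy as np

def radiance_map(images, log_dts, g):
    """Fuse exposures into a log irradiance map ln(E).

    images:  list of aligned uint8 arrays, one per exposure
    log_dts: ln(exposure time) for each image, in the same order
    g:       recovered log response curve, indexed by pixel value 0..255
    """
    z = np.arange(256, dtype=np.float64)
    w = np.where(z <= 127.5, z, 255.0 - z)           # hat weighting
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, log_dt in zip(images, log_dts):
        wz = w[img]
        num += wz * (g[img] - log_dt)                # w(Z)*(g(Z) - ln dt)
        den += wz
    return num / np.maximum(den, 1e-8)               # avoid division by zero
```

Pixels that are saturated or black in every exposure get zero total weight; the small floor in the denominator only guards against dividing by zero in that case.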

Solution [1] The minimization is written as an overdetermined linear system Ax = b, where the unknown vector x stacks the n = 256 values g(0), …, g(255) followed by the N = I×J values ln E_i, and k is a one-dimensional index over all pixels. Each data-fitting equation contributes one row, w(Z_ij)·g(Z_ij) − w(Z_ij)·ln E_i = w(Z_ij)·ln Δt_j, and each smoothness term λw(z)g″(z) contributes one row with the pattern λw(z)·[1, −2, 1] on the entries g(z−1), g(z), g(z+1). One additional equation fixes the middle of the curve, g(128) = 0, to remove the offset ambiguity, and the resulting system is solved by linear least squares.
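
The construction can be sketched in NumPy (a minimal port of the least-squares setup; variable names are mine, and as in [1] one extra row pins g(128) = 0):

```python
import numpy as np

def gsolve(Z, log_dt, lam=100.0):
    """Recover the log response g and log irradiances from pixel samples.

    Z:      (N, P) int array of pixel values, N locations x P exposures
    log_dt: length-P array of ln(exposure time)
    lam:    smoothness weight lambda
    Returns (g, lnE): g has 256 entries, lnE has N entries.
    """
    n = 256
    N, P = Z.shape
    # Hat weighting: 0..127 rising, then 127..0 falling.
    w = np.concatenate([np.arange(128), np.arange(128)[::-1]]).astype(float)
    A = np.zeros((N * P + n - 1, n + N))
    b = np.zeros(A.shape[0])

    k = 0
    # Data-fitting rows: w(Z)*g(Z) - w(Z)*lnE_i = w(Z)*ln(dt_j)
    for i in range(N):
        for j in range(P):
            wij = w[Z[i, j]]
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * log_dt[j]
            k += 1
    # Fix the curve's offset: g(128) = 0
    A[k, 128] = 1.0
    k += 1
    # Smoothness rows: lam*w(z)*(g(z-1) - 2g(z) + g(z+1)) = 0
    for z in range(1, n - 1):
        A[k, z - 1:z + 2] = lam * w[z] * np.array([1.0, -2.0, 1.0])
        k += 1

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n], x[n:]
```

On synthetic data from a camera whose log response is exactly linear, the system is consistent and the recovered g matches the true curve up to numerical precision.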

Experimental Results [1]

HDR SHOP [2]

HDR SHOP: Download V1.0

HDR SHOP: Tutorial

HDR SHOP: Camera Curve Calibration

HDR SHOP: Data

Application: Image-Based Lighting [5]

References

1. Paul E. Debevec and Jitendra Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," Proceedings of SIGGRAPH 97, August 1997.
2. HDR Shop, available at http://projects.ict.usc.edu/graphics/hdrshop/
3. Wikipedia, "High Dynamic Range Imaging," available at http://www.wikipedia.org
4. Wikipedia, "High Dynamic Range," available at http://www.wikipedia.org
5. Image-Based Lighting, available at http://www.debevec.org/ibl2001/