Next Generation CAT System

Jaewon Kim, MIT Media Lab
e-mail: jaewonk@media.mit.edu

Advisor: Ramesh Raskar, Associate Professor of Media Arts & Sciences, MIT Media Lab
Reader: V. Michael Bove, Principal Research Scientist of Media Arts & Sciences, MIT Media Lab
Reader: Fredo Durand, Associate Professor of EECS, MIT CSAIL
Reader: Yasuhiro Mukaigawa, Associate Professor, Osaka University

Table of Contents

1 Introduction
  1.1 Contributions
  1.2 Related Work
2 Proposed Work
  2.1 4D light field
    2.1.1 A mask-based capturing method of 4D light field
    2.1.2 Decoding process of 4D light field image
  2.2 Volumetric Reconstruction using ART
  2.3 Getting a clear image through a scattering medium
    2.3.1 Image Acquisition using a Pinhole Array
    2.3.2 Direct-Global Separation via Angular Filtering
3 Evaluation
4 Required resources
5 Timeline

Abstract

Since the first CAT system was introduced in 1971, its basic form, built around a scanned X-ray source, has not changed. Such a scanning mechanism requires a huge system and long exposure times. This thesis proposes a concept for a next-generation CAT system, based on novel techniques that enable a wearable, real-time CAT machine. As a first step, a high-speed, scan-free CAT system will be implemented using a 4D light field capturing technique. Next, a tomographic system using visible light instead of X-rays will be explored to develop a harmless, wearable CAT machine. In the experimental setup, translucent objects and visible light sources will be used to imitate the effect of X-ray sources.

1 Introduction

A CAT system is generally used to obtain a 3D inner view of the human body for medical purposes. The CAT system has relied on X-ray scanning for forty years and has been recognized as a huge, slow, and harmful system. We often feel the need to monitor or check the inside of our bodies for health reasons, or to use the inner shape of our bodies for purposes such as biometric identification. This thesis will address techniques to bring such capabilities into our daily lives.

The first goal is implementing a scan-free tomographic system, which is essential for realizing a compact and very fast CAT system. The second goal is developing a wearable CAT system that allows easy access to CAT imaging in everyday life. Toward these goals, a 4D light field capturing technique will be applied to the CAT system. A current CAT system takes multiple images at different locations of the X-ray source by scanning it rotationally. Scanning the X-ray source and acquiring multiple images are the main factors that make the system huge and slow. To eliminate this process, we propose to capture a 4D light field in a single image. The 4D light field is defined by the 2D spatial and 2D angular information of light. Previous researchers showed that a lenslet or pinhole array can capture a 4D light field in a single-shot image. Applying this technique to a CAT system makes it possible to record, in a single image, the multiple images that would otherwise be taken at different positions of the X-ray source. By placing multiple X-ray sources at different positions, all of their projection images can be stored in one single-shot image. Thus, a scan-free and instantaneous CAT system can be implemented with this technique.

Another goal is acquiring a clear image of the inside of the human body with harmless light sources. Many researchers have presented methods for this purpose in the field of DOT (diffuse optical tomography). They have mostly used NIR (near-infrared) LED sources to view the inside of certain parts of the human body, but it remains difficult to obtain clear images of those parts with such harmless light sources. I will propose a new method to see clearly inside the human body, or scattering media in general, with harmless LED sources. In this method, we separate the light transmitted through the scattering medium or human skin into direct and scattered components. We then reconstruct an image using only the direct-component rays, which gives much sharper results than a normal image containing the scattered component. This thesis work is expected to contribute to various fields such as medical imaging, handheld body viewers, and new biometrics.

1.1 Contributions

This thesis will address novel techniques to realize a next-generation CAT system that is wearable, fast, and harmless.
The primary technical contributions are as follows. This thesis will develop a single-shot CAT technique that generates a real-time 3D inner view of the human body in a scan-free system. With this technique, it would be possible to monitor the movement of organs such as the heart. A wearable, portable system will also be realized with an improved diffuse optical technique, the other main topic of this thesis. This thesis will develop a method for single-exposure separation of the direct and global components of scattered, transmitted light, using a pinhole or lenslet array placed close to the image sensor. In the direct-only image, high-frequency details are restored and provide strong edge cues for scattering objects. Due to its single-shot nature, this method can also be applied to dynamic scenes. This thesis will demonstrate enhanced volumetric reconstruction of scattering objects using direct-component images. These separation methods are well-suited to applications in medical imaging, providing an internal view of scattering objects such as human skin using visible-wavelength light sources (rather than X-rays).

1.2 Related Work

Light Field Capturing: The concept of capturing a 4D light field was presented by Levoy [Levoy and Hanrahan 1996] and Gortler [Gortler et al. 1996]. Isaksen [Isaksen et al. 2000] described a practical method to compute the 4D light field. Capture methods were further developed by Levoy [Levoy et al. 2004] and Vaish [Vaish et al. 2004], who presented methods using microlens arrays and camera arrays. The camera-array method requires a huge system, and the lens-array method suffers from aberrations introduced by its many lenses. Recently, Veeraraghavan [Veeraraghavan et al. 2007] presented a simple way to capture a 4D light field with a thin 2D mask.

Shield Field Imaging: Lanman [Lanman et al. 2008] showed a way to reconstruct the 3D shape of an occluder from a single shot. In his research, many LEDs cast silhouettes from different directions onto a screen. The silhouettes are coded by a mask inside the screen and captured in a single exposure. From the captured image, the silhouette cast by each LED is decoded at low resolution, and by combining these images the 3D outer shape of the occluder is reconstructed.

Direct-Global Separation: Direct-global separation of light is widely studied in diverse fields spanning computer vision, graphics, optics, and physics. Due to the complexities of scattering, reflection, and refraction, analytical methods do not achieve satisfactory results in practical situations. In computer vision and graphics, Nayar [Nayar et al. 2006] present an effective method to separate the direct and global components of a scene by projecting a sequence of high-frequency patterns. Their work is one of the first to handle arbitrary natural scenes. However, their solution requires temporally-multiplexed illumination, limiting its utility for dynamic scenes. Nasu [Nasu et al. 2007] present an accelerated method using a sequence of three patterns. In addition, Rosen and Abookasis [Rosen and Abookasis 2004] present a descattering method using a microlens array.

Tomographic Reconstruction: Trifonov [Trifonov et al. 2006] consider volumetric reconstruction of transparent objects using tomography. Such an approach has the advantage of avoiding occlusion problems by imaging in a circular arc about the object.
In this thesis, we reconstruct the 3-D shape of scattering objects using a limited baseline (rather than a full 360-degree turntable sequence), from only eight direct-only images, via the well-established algebraic reconstruction technique (ART).

2 Proposed Work

2.1 4D light field

Figure 1 defines the 2D light field coordinates. A ray from a point on the lighting plane is projected to a point on the sensor, described by the spatial parameter x and the angular parameter θ. In the real world, the system has four parameters, x, y, θ, and φ, which define the 4D light field.

Figure 1: Light field parameterization in 1D; schematic diagram of the imaging setup.

2.1.1 A mask-based capturing method of 4D light field

Figure 1 also illustrates the process of capturing a 2D light field. Rays from the LEDs pass through the translucent media, are modulated by a mask, and are projected onto a diffuser. In this figure, the mask serves to transform the 2D light field, x and θ, into the 1D spatial coordinate x. The transformed 1D signals are sensed by a camera. In the real world, a mask transforms the 4D light field, the 2D spatial and 2D angular information of light, into 2D spatial information. A mask is a 2D pattern array printed on a thin, transparent film. Various kinds of masks can be used to capture a 4D light field; the pinhole array mask in Figure 2 is currently being used.

Figure 2: Small section of the 150x100 pinhole array mask used in our prototype, with a pinhole diameter of 428 microns.

Figure 3 shows a small part of an image captured by the actual system with a 6x6 LED array. The red box contains the 2D angular information at a single spatial point, and each white tile in the red box gives a ray intensity value for a specific angular and spatial position. In this figure, the angular resolution is 6x6, equal to the number of LEDs in the lighting plane. Figure 4 shows the current experimental setup, which exactly matches the schematic diagram in Figure 1. The 6x6 LEDs in the lighting plane generate different 2D projection images of an object on a screen. Figure 5 is a 4D light field image of a wine glass captured in this scheme.

Figure 3: An inset image focused on the diffuser in Figure 2.

Figure 4: Current experimental setup.

Figure 5: Coded image of the 4D light field penetrating a wine glass (image resolution: 3872x2592).

2.1.2 Decoding process of 4D light field image

In the decoding process, each 2D spatial image is generated according to each angular division. In our scheme, the geometric positions of the lighting, diffuser, and mask planes are carefully chosen so that the angular resolution equals the number of light sources. Thus, the number of images generated by the decoding process equals the number of LEDs in the lighting plane, and each is an image projected by a single LED. Figure 6 illustrates the decoding process: pixels in the same angular region are collected to generate one 2D spatial image. Repeating this process yields N images, where N is the number of LEDs. When a pinhole array mask is used, the resolution of each decoded image equals the number of pinholes in the mask.

Figure 6: Decoding process of a coded 4D light field image.

Figure 7 shows the 4x4 set of images decoded this way. The resolutions of the original image in Figure 5 and of a decoded image in Figure 7 are 3872x2592 and 150x100, respectively. Each decoded image gives a different angular view of the object, demonstrating that multiple views of an object can be obtained instantaneously from a single-shot image with this method; a code sketch of the decoding step follows.
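The decoding step reduces to a pixel-reshuffling operation. Below is a minimal Python sketch of it under an idealized geometry, assuming the coded image is a regular grid of pinhole tiles, each holding an na x na block of angular samples; the function name decode_light_field and the synthetic input are our own illustrative choices, not part of the prototype.

```python
import numpy as np

def decode_light_field(coded, num_py, num_px, na):
    """Split a coded pinhole-array image into per-LED views.

    coded          : 2D array, the single-shot sensor image
    num_py, num_px : pinhole counts (spatial resolution of each view)
    na             : angular samples per pinhole along each axis
                     (= LED grid size)

    Returns an (na, na, num_py, num_px) array: one low-resolution
    image per LED direction.
    """
    # Crop to an exact grid of pinhole tiles, each na x na pixels.
    coded = coded[:num_py * na, :num_px * na]
    # Reshape so the axes become (pinhole_y, angle_y, pinhole_x, angle_x).
    tiles = coded.reshape(num_py, na, num_px, na)
    # Gathering the same angular sample under every pinhole yields
    # one 2D spatial image per LED, as in Figure 6.
    return tiles.transpose(1, 3, 0, 2)

# Example with the prototype's stated dimensions: a 150x100 pinhole
# mask and a 6x6 LED array (random data stands in for Figure 5).
coded = np.random.rand(100 * 6, 150 * 6)
views = decode_light_field(coded, num_py=100, num_px=150, na=6)
print(views.shape)  # (6, 6, 100, 150): 36 views of 150x100 pixels
```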

Figure 7: 16 decoded images from a coded light field image of a wine glass and a straw (the resolution of each image is 150x100).

2.2 Volumetric Reconstruction using ART

We will use the algebraic reconstruction technique (ART) presented by Roh [Roh et al. 2004] to reconstruct the 3-D shape of scattering objects, following traditional short-baseline tomography approaches. Generally, when a ray passes through an object, the change in intensity can be modeled by Equation (1):

$$I = I_0 \exp\left(-\sum_{i=1}^{N} a_i f_i\right) \quad (1)$$

Here $I_0$ is the original intensity of the ray and $I$ is the resulting intensity after penetrating $N$ layers inside the object; $a_i$ is the distance traveled through the $i$-th material, whose absorption coefficient is $f_i$, as depicted in Figure 8. Equation (2) is the logarithmic form of Equation (1):

$$h = \log(I_0/I) = \sum_{i=1}^{N} a_i f_i \quad (2)$$

Equation (2) can be rewritten for the $j$-th ray as

$$h^j(t) = \sum_{i=1}^{N} a_i^j f_i \quad (3)$$

Figure 8: Projection model of a ray.

Our problem is now to find the values $f_i$, which correspond to the density information within the reconstruction region. Equation (3) can be expressed in matrix form as Equation (4):

$$h = AF, \quad A = \begin{bmatrix} a_1^1 & a_2^1 & \cdots & a_N^1 \\ a_1^2 & a_2^2 & \cdots & a_N^2 \\ \vdots & \vdots & & \vdots \\ a_1^M & a_2^M & \cdots & a_N^M \end{bmatrix} = \begin{bmatrix} a^1 \\ a^2 \\ \vdots \\ a^M \end{bmatrix}, \quad F \in \mathbb{R}^N, \; h \in \mathbb{R}^M, \; A \in \mathbb{R}^{M \times N} \quad (4)$$

The matrix $A$ represents the projective geometry of the rays, computed from the emitting position of each ray and its received position, for a predetermined reconstruction region in which the object is assumed to be placed. The vector $h$ holds the sensed intensity values.

Equation (5) is used to obtain the next-step value $f_i(t+1)$ from the parameters at the current step:

$$f_i(t+1) = f_i(t) + \lambda \, \frac{g^j - h^j(t)}{\sum_{i=1}^{N} (a_i^j)^2} \, a_i^j \quad (5)$$

In this equation, $t$ is the iteration index and $\lambda$ is a coefficient related to convergence; $g^j$ is the measured sensor value for ray $j$, and $h^j(t)$ is computed from Equation (4) using the current $f$. As the iteration index $t$ increases, the error term $g^j - h^j(t)$ decreases and $f_i(t)$ approaches the exact value, finally yielding the approximate reconstruction $f$. In practice, our implementation of ART takes approximately five minutes for our data sets.

Figures 9 and 10 show 3D reconstruction results for two objects obtained with this ART method, using 8 images taken at different viewing angles. Thus, by combining the 4D light field capturing technique of the previous section with this ART method, we expect that the whole shape of a translucent object can be reconstructed instantaneously with multiple light sources. It will also enable a scan-free, fast CAT system when multiple X-ray sources are applied.

Figure 9: Tomographic reconstruction of a dog-shaped object. (a) Captured images at 8 different viewing angles. (b) 3D reconstruction results by the ART method.
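To make the update rule concrete, here is a minimal Python sketch of ART using Kaczmarz-style sweeps over Equation (5); the tiny random ray matrix, the step size, and the name art_reconstruct are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def art_reconstruct(A, g, lam=0.5, num_sweeps=50):
    """Algebraic reconstruction technique: Kaczmarz-style row sweeps.

    A : (M, N) ray matrix; A[j, i] is the path length a_i^j of ray j
        through voxel i (Equation 4).
    g : (M,) measured log-attenuations, g^j = log(I0 / I^j) (Equation 2).
    Returns f : (N,) estimated absorption coefficients.
    """
    M, N = A.shape
    f = np.zeros(N)
    for _ in range(num_sweeps):
        for j in range(M):
            a_j = A[j]
            h_j = a_j @ f                    # h^j(t), Equation (3)
            denom = float((a_j ** 2).sum())
            if denom > 0.0:
                # Equation (5): move f toward consistency with ray j.
                f += lam * (g[j] - h_j) / denom * a_j
    return f

# Tiny synthetic test: 4 voxels probed by 6 rays of random geometry.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(6, 4))       # hypothetical ray geometry
f_true = np.array([0.2, 0.8, 0.5, 0.1])      # ground-truth absorption
g = A @ f_true                               # noise-free measurements
print(np.round(art_reconstruct(A, g), 3))    # converges toward f_true
```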

Figure 10: Tomographic reconstruction of a wine glass. (a) Captured images at 8 different viewing angles. (b) 3D reconstruction results by the ART method.

2.3 Getting a clear image through a scattering medium

2.3.1 Image Acquisition using a Pinhole Array

In an imaging setup like that of Figure 11, rays emitted from a light source are scattered as they pass through a scattering medium, and both the original and the scattered rays are projected onto a screen. The original rays emitted from the light source are called direct component rays, and the additional rays produced by the scattering medium are called global component rays. When an object is placed inside the scattering medium it looks blurred, because the global component rays, which have very low spatial frequency, overlap the direct component rays at the sensor. It follows that a clear image of the object inside can be obtained by separating the global component out of the sensed image. A pinhole or lenslet array mask can be applied for this purpose when it is placed in front of a diffuser, as shown in Figure 11.

Figure 11: Diagram of the capture setup. A diffuser is used to form an image through a pinhole array mask. A high-resolution camera captures the array of pinhole images in a single exposure.

Figure 12 shows how the direct and global rays are formed through the pinhole or lenslet array. In the image formation there are two distinct regions: a mixed region containing both components, and a region of pure global rays. We separate the scattered component from the sensed values in the mixed region by fitting the global component values in the pure global region under each pinhole.

Figure 12: Image formation model for a multiple-scattering scene using a single pinhole. Note that the directly-transmitted ray impinges on a compact region below the pinhole, yet mixes with scattered global rays. The received signal located away from the direct-only peak is due to scattered rays.

As shown in Figure 13, the diffuser-plane image consists of a set of sharp peaks under each pinhole when no scattering medium lies between the light source and the diffuser. As shown on the right, the pinhole images become extended, blurred patterns when a scattering object is placed between the light source and the camera; ultimately, the globally-scattered light causes mixed samples to appear in the neighborhood of the central pixel under each pinhole. This blurring of the received image would be impossible to separate without the angular samples contributed by the pinhole array mask. The angular sample directly under each pinhole can be used to estimate the direct-plus-global transmission along the ray between a given pixel and the light source, while any non-zero neighboring pixels can be fully attributed to global illumination caused by volumetric scattering. From Figure 13 we thus infer two regions in the image under each pinhole: the first contains a mixed signal due to cross-talk between the direct and global components; the second represents a pure global component. In the following section, we show a simple method for analyzing such imagery to estimate separate direct and global components for multiple-scattering media.

Figure 13: Pinhole images. (Left) Received images for each pinhole camera when no scattering object is present. (Right) Pinhole images when a scattering object is present.
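Before turning to the separation algorithm, a short Python sketch may help make this two-region structure concrete: it simulates a 1-D pinhole image as a sharp direct peak on a smooth scattered pedestal, so each diffuser-plane sample is the sum of a direct and a global part. The Gaussian pedestal shape and all parameter values are invented for illustration.

```python
import numpy as np

def simulate_pinhole_image(n=41, direct=150.0, scatter=60.0):
    """Simulate a 1-D diffuser-plane image under a single pinhole.

    Direct light lands only on the central pixel, while globally
    scattered light forms a smooth, low-frequency pedestal across
    the whole pinhole image; every sample is their sum.
    """
    p = np.arange(n) - n // 2                          # offsets from center
    global_part = scatter * np.exp(-(p / 15.0) ** 2)   # smooth pedestal
    direct_part = np.zeros(n)
    direct_part[n // 2] = direct                       # compact direct region
    return direct_part + global_part, p

L, p = simulate_pinhole_image()
# Center pixel is mixed (direct + global); offset pixels are pure global.
print(float(L[p == 0][0]), float(L[p == 10][0]))
```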
2.3.2 Direct-Global Separation via Angular Filtering

In this section we consider direct-global separation for a 1-D sensor and a 2-D scene; the results extend trivially to 2-D sensors and 3-D volumes. As shown in the second plot of Figure 14, a single pinhole image is defined as two separate regions: a pure global component region and a region of mixed direct and global components. We represent the received intensities at the diffuser-plane pixels as $\{L_0, L_1, \ldots, L_n\}$ when a scattering object is placed between the light source and the diffuser. The individual sensor values are modeled as

$$L_n = G_n + D_n \quad (6)$$

where $\{G_n\}$ and $\{D_n\}$ represent the underlying global and direct intensities measured in the sensor plane, respectively.

As shown in Figure 14, a straightforward algorithm can be used to estimate the direct and global components received at each pinhole. First, we fit a quadratic polynomial to the values lying outside the region that is non-zero in a reference pinhole image captured with no scattering object present (in our system, the central region of 7x7 pixels is excluded). Note that in this region $L_i \approx G_i$. Afterwards, the polynomial model is used to approximate the values of the global components $\{G_n\}$ in the region directly below each pinhole; this region is subject to mixing, so the global component must be extrapolated from the global-only region. Finally, a direct-only image is estimated by subtracting the estimated global component at the central pixel under each pinhole from the measured value, such that $D_0 \approx L_0 - G_0$.

Figure 14: (From top to bottom) First, a 1-D sensor image for a single LED illuminating a diffuser with no object present. Second, an image with a scattering object present. Third, measured (black) and estimated polynomial fit (red) for the global-only component. Fourth, the direct-only image formed by subtracting the third from the second.
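The following Python sketch implements this fit-and-subtract procedure on a synthetic 1-D pinhole image like the one simulated above; the 7-pixel central exclusion follows the text, while the data and the name separate_direct_global are our own illustrative choices.

```python
import numpy as np

def separate_direct_global(L, exclude=7):
    """Estimate direct and global components under one pinhole.

    L       : 1-D array of diffuser-plane intensities for one pinhole.
    exclude : width of the central mixed region left out of the fit
              (7 pixels, matching the exclusion described in the text).
    """
    n = len(L)
    p = np.arange(n) - n // 2
    pure = np.abs(p) > exclude // 2            # pure global region only
    # Quadratic polynomial fit to the global-only samples (L_i ~ G_i).
    coeffs = np.polyfit(p[pure], L[pure], deg=2)
    G = np.polyval(coeffs, p)                  # extrapolated global {G_n}
    D = np.clip(L - G, 0.0, None)              # direct-only: D_0 ~ L_0 - G_0
    return D, G

# Rebuild the synthetic pinhole image from the previous sketch.
p = np.arange(41) - 20
L = 60.0 * np.exp(-(p / 15.0) ** 2)            # smooth global pedestal
L[20] += 150.0                                 # direct ray at central pixel
D, G = separate_direct_global(L)
print(round(float(D[20]), 1))                  # close to 150: peak recovered
```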

3 Evaluation

General CAT systems use X-ray sources, but I will experiment with visible light sources and translucent objects to reproduce the same imaging conditions as X-ray. X-ray images record, in the 2D spatial domain, the amount of X-ray absorbed as it is transmitted through a medium. This imaging condition can be imitated with translucent media through which visible rays are transmitted. The single-shot CAT technique will be evaluated by the quality of the 3D reconstruction and by the imaging time, which is a key factor for real-time 3D view generation. Successful implementation of a wearable CAT system, the other main goal of this thesis, will also be evaluated.

4 Required resources

To imitate X-ray imaging, translucent objects and LED sources will be used. I already have some translucent objects, such as wine glasses and toys, and such objects are easy to find around us. Multiple LED sources have been prepared as well and are aligned in a metal frame. To implement the 4D light field capturing technique over a large angular range, we need a large pinhole array film or lenslet array. We have some now, but may need to make new ones to achieve higher spatial resolution. A normal DSLR digital camera will be used to capture images. For the wearable application, we need to repackage the components above to fit a part of the human body. In such a setup, we will need a small imaging sensor and a small lenslet array. Small Dragonfly cameras will be adequate for this purpose, and we have a small 1 in x 1 in lenslet array. If we need new lenslet arrays, we can purchase a suitable one from AOA Company in Cambridge.

5 Timeline

The project timeline is shown in Figure 15.

References

ATCHESON, B., IHRKE, I., HEIDRICH, W., TEVS, A., BRADLEY, D., MAGNOR, M., AND SEIDEL, H. P. 2008. Time-resolved 3D capture of non-stationary gas flows. ACM Transactions on Graphics.

GORTLER, S. J., GRZESZCZUK, R., SZELISKI, R., AND COHEN, M. F. 1996. The lumigraph. SIGGRAPH 96.

GU, J., NAYAR, S., GRINSPUN, E., BELHUMEUR, P., AND RAMAMOORTHI, R. 2008. Compressive structured light for recovering inhomogeneous participating media. In European Conference on Computer Vision (ECCV).

ISAKSEN, A., MCMILLAN, L., AND GORTLER, S. 2000. Dynamically reparameterized light fields. Proc. SIGGRAPH.

JENSEN, H., MARSCHNER, S., LEVOY, M., AND HANRAHAN, P. 2001. A practical model for subsurface light transport. SIGGRAPH 2001, 511–518.

LANMAN, D., RASKAR, R., AGRAWAL, A., AND TAUBIN, G. 2008. Shield fields: Modeling and capturing 3D occluders. SIGGRAPH Asia 2008.

LEVOY, M., AND HANRAHAN, P. 1996. Light field rendering. SIGGRAPH 96, 31–42.

LEVOY, M., CHEN, B., VAISH, V., HOROWITZ, M., MCDOWALL, I., AND BOLAS, M. 2004. Synthetic aperture confocal imaging. ACM Transactions on Graphics 23, 825–834.
NARASIMHAN, S. G., NAYAR, S. K., SUN, B., AND KOPPAL, S. J. 2005. Structured light in scattering media. In Proc. IEEE ICCV, 420–427.

NASU, O., HIURA, S., AND SATO, K. 2007. Analysis of light transport based on the separation of direct and indirect components. IEEE Intl. Workshop on Projector-Camera Systems (ProCams 2007).

NAYAR, S., KRISHNAN, G., GROSSBERG, M., AND RASKAR, R. 2006. Fast separation of direct and global components of a scene using high frequency illumination. ACM Transactions on Graphics 25, 3, 935–943.

NG, R., LEVOY, M., BRÉDIF, M., DUVAL, G., HOROWITZ, M., AND HANRAHAN, P. 2005. Light field photography with a hand-held plenoptic camera. Tech. Rep., Stanford University.

ROH, Y. J., PARK, W. S., CHO, H. S., AND JEON, H. J. 2004. Implementation of uniform and simultaneous ART for 3-D reconstruction in an X-ray imaging system. IEE Proceedings - Vision, Image and Signal Processing 151.

ROSEN, J., AND ABOOKASIS, D. 2004. Noninvasive optical imaging by speckle ensemble. Optics Letters 29, 3.

SUN, B., RAMAMOORTHI, R., NARASIMHAN, S. G., AND NAYAR, S. K. 2005. A practical analytic single scattering model for real time rendering. ACM Transactions on Graphics, 1040–1049.

TRIFONOV, B., BRADLEY, D., AND HEIDRICH, W. 2006. Tomographic reconstruction of transparent objects. Eurographics Symposium on Rendering.

TUCHIN, V. 2000. Tissue Optics. SPIE.

VAISH, V., WILBURN, B., JOSHI, N., AND LEVOY, M. 2004. Using plane + parallax for calibrating dense camera arrays. Proc. Conf. Computer Vision and Pattern Recognition.

VEERARAGHAVAN, A., RASKAR, R., AGRAWAL, A., MOHAN, A., AND TUMBLIN, J. 2007. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM SIGGRAPH 2007.

Figure 15: Timeline