Next Generation CAT System


Jaewon Kim (jaewonk@media.mit.edu)
MIT Media Lab

Advisor: Ramesh Raskar, Associate Professor of Media Arts & Sciences, MIT Media Lab
Reader: V. Michael Bove, Principal Research Scientist of Media Arts & Sciences, MIT Media Lab
Reader: Frédo Durand, Associate Professor of EECS, MIT CSAIL
Reader: Yasuhiro Mukaigawa, Associate Professor, Osaka University

Table of Contents

1 Introduction
  1.1 Contributions
  1.2 Related Work
2 Proposed Work
  2.1 4D light field
    A mask-based capturing method of 4D light field
    Decoding process of 4D light field image
  2.2 Volumetric Reconstruction using ART
  2.3 Getting a clear image through a scattering medium
    Image Acquisition using a Pinhole Array
    Direct-Global Separation via Angular Filtering
3 Evaluation
4 Required resources
5 Timeline

Abstract

Since the first CAT system was introduced in 1971, its basic form, which scans an X-ray source around the subject, has not changed. Such a scanning mechanism requires a large system and long exposure times. This thesis proposes a concept for a future CAT system, built on novel techniques that enable a wearable, real-time CAT machine. As a first step, a high-speed, scan-free CAT system will be implemented using a 4D light field capturing technique. Next, a tomographic system using visible light instead of X-rays will be explored in order to develop a harmless, wearable CAT machine. In the experimental setup, translucent objects and visible light sources will be used to imitate the imaging conditions of X-ray sources.

1 Introduction

A CAT system is generally used to obtain a 3D inner view of the human body for medical purposes. CAT systems have relied on X-ray scanning for forty years and are known to be large, slow, and harmful. We often wish to monitor or check the inside of our bodies for health reasons, or to use the inner shape of the body for other purposes such as biometrics. This thesis addresses techniques to make such capabilities available in daily life.

The first goal is to implement a scan-free tomographic system, an essential step toward a compact and very fast CAT system. The second goal is to develop a wearable CAT system that makes CAT imaging easily accessible in everyday life. To these ends, a 4D light field capturing technique will be applied to the CAT setting. Current CAT systems take multiple images at different X-ray source positions by rotating the source; this scanning of the X-ray source and the sequential acquisition of multiple images are the main factors that make the system large and slow. To eliminate this process, we propose to capture a 4D light field in a single image. A 4D light field is defined by the 2D spatial and 2D angular information of light. Prior work has shown that a lenslet or pinhole array can capture a 4D light field in a single shot. Applying this technique to a CAT system makes it possible to encode, in a single image, the multiple images that would otherwise be taken at different X-ray source positions. By placing multiple X-ray sources at different positions, all of their projections can be stored in a single exposure. A scan-free, instantaneous CAT system can therefore be implemented with this technique.

Another goal is to acquire a clear image of the inside of the human body using harmless light sources. Many methods have been presented for this purpose in the field of DOT (Diffuse Optical Tomography). Most use NIR (near-infrared) LED sources to view inside certain parts of the human body, but it remains difficult to obtain clear images with such harmless sources. I will propose a new method to clearly view the inside of the human body, or more generally of scattering media, with harmless LED sources. In this method, light transmitted through a scattering medium or human skin is separated into direct and scattered components; a reconstructed image generated from only the direct component is much sharper than an ordinary image that includes the scattered component. This thesis work is expected to contribute to fields such as medical imaging, handheld body viewers, and new biometrics.

1.1 Contributions

This thesis will address novel techniques to realize a next-generation CAT system that is wearable, fast, and harmless.
The primary technical contributions are as follows:

- This thesis will develop a single-shot CAT technique that generates a real-time 3D inner view of the human body without scanning. With this technique, it would be possible to monitor the movement of organs such as the heart. A wearable, portable system will also be realized with an improved diffuse optical technique, the other main topic of this thesis.
- This thesis will develop a method for single-exposure separation of the direct and global components of scattered, transmitted light, using a pinhole or lenslet array placed close to the image sensor. In the direct-only image, high-frequency details are restored and provide strong edge cues for scattering objects. Due to its single-shot nature, this method can also be applied to dynamic scenes.
- This thesis will demonstrate enhanced volumetric reconstruction of scattering objects using direct-component images. These separation methods are well suited to applications in medical imaging, providing an internal view of scattering objects such as human skin using visible-wavelength light sources rather than X-rays.

1.2 Related Work

Light Field Capturing: The concept of capturing a 4D light field was introduced by Levoy and Hanrahan [Levoy and Hanrahan 1996] and Gortler et al. [Gortler et al. 1996]. Isaksen et al. [Isaksen et al. 2000] described a practical method to compute 4D light fields. Capture methods were further developed by Levoy et al. [Levoy et al. 2004] and Vaish et al. [Vaish et al. 2004], who presented approaches using microlens arrays and camera arrays. The camera-array approach requires a large system, and the lens-array approach suffers from aberrations introduced by the many lenses. Recently, Veeraraghavan et al. [Veeraraghavan et al. 2007] presented a simple way to capture a 4D light field with a thin 2D mask.

Shield Field Imaging: Lanman et al. [Lanman et al. 2008] showed a way to reconstruct the 3D shape of an occluder from a single shot. In their work, many LEDs cast silhouettes from different directions onto a screen. The silhouettes are coded by a mask inside the screen and captured in a single exposure. From the captured image, the silhouette cast by each LED is decoded at low resolution, and by combining these images the 3D outer shape of the occluder is reconstructed.

Direct-Global Separation: Direct-global separation of light is widely studied in diverse fields spanning computer vision, graphics, optics, and physics. Due to the complexities of scattering, reflection, and refraction, analytical methods do not achieve satisfactory results in practical situations. In computer vision and graphics, Nayar et al. [Nayar et al. 2006] present an effective method to separate the direct and global components of a scene by projecting a sequence of high-frequency patterns; their work is one of the first to handle arbitrary natural scenes. However, their solution requires temporally-multiplexed illumination, limiting its utility for dynamic scenes. Nasu et al. [Nasu et al. 2007] present an accelerated method using a sequence of three patterns. In addition, Rosen and Abookasis [Rosen and Abookasis 2004] present a descattering method using a microlens array.

Tomographic Reconstruction: Trifonov et al. [Trifonov et al. 2006] consider volumetric reconstruction of transparent objects using tomography. Such an approach has the advantage of avoiding occlusion problems by imaging in a circular arc about the object.
In their work, the 3D shapes of transparent objects were reconstructed at high resolution. In our work, we reconstruct the 3D shape of scattering objects from a limited baseline (rather than a full 360-degree turntable sequence), using only eight direct-only images and the well-established algebraic reconstruction technique (ART).

Figure 1: Light field parameterization in 1D and schematic diagram of the imaging setup.

Figure 2: Small section of the pinhole array mask used in our prototype, with a pinhole diameter of 428 microns.

Figure 3: An inset image focused on the diffuser.

Figure 4: Current experimental setup.

Figure 5: Coded image of the 4D light field penetrating a wine glass (image resolution: 3872x2592).

Figure 6: Decoding process of a coded 4D light field image.

2 Proposed Work

2.1 4D light field

Figure 1 defines the 2D light field coordinates. A ray from a point on the lighting plane is projected to a point on the sensor, with spatial parameter x and angular parameter θ. In the real system there are four parameters, x, y, θ, and φ, which define the 4D light field.

A mask-based capturing method of 4D light field

Figure 1 illustrates the process of capturing a 2D light field. Rays from the LEDs pass through the translucent medium, are modulated by a mask, and are projected onto a diffuser. In this figure, the mask transforms the 2D light field, x and θ, into the 1D spatial coordinate x; the transformed 1D signals are sensed by a camera. In the real system, the mask transforms the 4D light field, i.e. the 2D spatial and 2D angular information of light, into 2D spatial information. A mask is a 2D pattern printed on a thin, transparent film. Various kinds of masks can be used to capture 4D light fields; the pinhole array mask in Figure 2 is currently being used. Figure 3 shows a small part of an image captured by the actual system with a 6x6 LED array. The red box contains the 2D angular information at one spatial point, and each white tile in the red box gives the ray intensity at a specific angular and spatial position. In this figure, the angular resolution is 6x6, which equals the number of LEDs in the lighting plane. Figure 4 shows the current experimental setup, which exactly matches the schematic diagram in Figure 1. The 6x6 LEDs in the lighting plane cast different 2D projection images of an object onto a screen. Figure 5 is a 4D light field image of a wine glass captured in this scheme.

Decoding process of 4D light field image

In the decoding process, one 2D spatial image is generated for each angular division. In our scheme, the geometric positions of the lighting, diffuser, and mask planes are chosen so that the angular resolution equals the number of light sources. The number of images generated by decoding therefore equals the number of LEDs in the lighting plane, and each is the image projected by one LED. Figure 6 illustrates the decoding process: pixels in the same angular region are collected to form a 2D spatial image. Repeating this process yields N images, where N is the number of LEDs. When a pinhole array mask is used, the resolution of each decoded image equals the number of pinholes in the mask. Figure 7 shows the 4x4 decoded image set obtained this way. The resolutions of the original image in Figure 5 and of a decoded image in Figure 7 are 3872x2592 and 150x100, respectively. Each decoded image gives a different angular view of the object, demonstrating that multiple views of any object can be obtained instantaneously from a single-shot image with this method.
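To make the decoding step concrete, the sketch below rearranges a coded pinhole-array image into per-LED views, assuming the geometry described above: each pinhole contributes one tile of angular samples on the diffuser. The function name, tile geometry, and array sizes are illustrative assumptions, not the actual thesis code.

```python
import numpy as np

def decode_light_field(coded, num_pinholes_y, num_pinholes_x, ang_res=6):
    """Rearrange a coded pinhole-array image into per-LED views.

    Assumes each pinhole projects an ang_res x ang_res tile of angular
    samples onto the diffuser, so `coded` has shape
    (num_pinholes_y * ang_res, num_pinholes_x * ang_res).
    Returns an array of shape (ang_res, ang_res, num_pinholes_y,
    num_pinholes_x): one low-resolution spatial image per LED.
    """
    tiles = coded.reshape(num_pinholes_y, ang_res, num_pinholes_x, ang_res)
    # Gather the same angular sample from every pinhole tile:
    # output axes are (angular_y, angular_x, spatial_y, spatial_x).
    return tiles.transpose(1, 3, 0, 2)

# Example with synthetic data: 150x100 pinholes, 6x6 LEDs.
coded = np.random.rand(100 * 6, 150 * 6)
views = decode_light_field(coded, 100, 150)
print(views.shape)  # (6, 6, 100, 150): 36 views of 150x100 pixels each
```

The reshape-and-transpose is just a bookkeeping view of the same pixels, which is why the decoding can run at capture rate on a full-resolution image.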

Figure 7: 16 decoded images from a coded light field image of a wine glass and a straw (the resolution of each image is 150x100).

2.2 Volumetric Reconstruction using ART

We will use the algebraic reconstruction technique (ART) presented by Roh et al. [Roh et al. 2004] to reconstruct the 3D shape of scattering objects, following traditional short-baseline tomography approaches. When a ray passes through an object, the change in its intensity can be modeled by Equation (1):

    I = I_0 \exp\left( -\sum_{i=1}^{N} a_i f_i \right)    (1)

Here I_0 is the original intensity of the ray and I is the resulting intensity after penetrating N layers inside the object; a_i is the distance traveled through the i-th material, whose absorption coefficient is f_i, as depicted in Figure 8. Equation (2) is the logarithmic form of Equation (1):

    h = \log(I_0 / I) = \sum_{i=1}^{N} a_i f_i    (2)

For the j-th ray, Equation (2) can be rewritten as

    h^j(t) = \sum_{i=1}^{N} a_i^j f_i(t)    (3)

Our problem is now to find the values f_i, which correspond to the density information within the reconstruction region. Equation (3) can be written in matrix form as Equation (4):

    h = A F, \quad A = \begin{pmatrix} a_1^1 & a_2^1 & \cdots & a_N^1 \\ \vdots & \vdots & & \vdots \\ a_1^M & a_2^M & \cdots & a_N^M \end{pmatrix}, \quad F \in \mathbb{R}^N, \; h \in \mathbb{R}^M, \; A \in \mathbb{R}^{M \times N}    (4)

The matrix A encodes the projective geometry of the rays, computed from the emitting and receiving positions for a predetermined reconstruction region in which the object is assumed to lie; the vector h holds the sensed intensity values.

Equation (5) gives the update from the values f_i(t) at the current step to the next step:

    f_i(t+1) = f_i(t) + \lambda \, \frac{g^j - h^j(t)}{\sum_{i=1}^{N} (a_i^j)^2} \, a_i^j    (5)

Here t is the iteration index and λ is a coefficient governing convergence; g^j is the measured sensor value for the j-th ray and h^j(t) is computed from Equation (4) using the current f. As t increases, the error term g^j - h^j(t) decreases and f_i(t) approaches the exact value, yielding the approximate reconstruction f. In practice, our implementation of ART takes approximately five minutes for our data sets.

Figures 9 and 10 show 3D reconstruction results for two objects obtained by this ART method from 8 images taken at different viewing angles. By combining the 4D light field capturing technique of the previous section with this ART method, we expect that the whole shape of any translucent object can be reconstructed instantaneously with multiple light sources; likewise, with multiple X-ray sources it would enable a scan-free, fast CAT system.

Figure 8: Projection model of a ray.

Figure 9: Tomographic reconstruction of a dog-shaped object. (a) Captured images at 8 different viewing angles. (b) 3D reconstruction results by the ART method.
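As a concrete illustration of the update rule in Equation (5), the sketch below runs ART-style sweeps on a toy system. The ray matrix A, measurements g, and relaxation value are synthetic assumptions for illustration; the thesis implementation follows Roh et al. and is not reproduced here.

```python
import numpy as np

def art_reconstruct(A, g, num_iters=50, lam=0.5):
    """Algebraic reconstruction technique (Kaczmarz-style sweeps).

    A: (M, N) ray-path lengths a_i^j through the N voxels.
    g: (M,) measured log-attenuation h = log(I0 / I) per ray.
    Applies the Equation (5) update once per ray, num_iters times.
    """
    M, N = A.shape
    f = np.zeros(N)
    row_norms = (A ** 2).sum(axis=1)          # sum_i (a_i^j)^2 per ray
    for _ in range(num_iters):
        for j in range(M):
            if row_norms[j] == 0.0:
                continue                       # ray misses the volume
            h_j = A[j] @ f                     # h^j(t) from Equation (3)
            f += lam * (g[j] - h_j) / row_norms[j] * A[j]
    return f

# Toy example: 4 voxels probed by 6 random rays.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
f_true = np.array([0.1, 0.4, 0.0, 0.7])       # absorption coefficients
g = A @ f_true                                 # noiseless measurements
print(np.round(art_reconstruct(A, g, num_iters=200), 3))  # near f_true
```

For a consistent, noiseless system the iterate converges toward the true coefficients for λ in (0, 2); smaller λ trades convergence speed for robustness to measurement noise.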

Figure 10: Tomographic reconstruction of a wine glass. (a) Captured images at 8 different viewing angles. (b) 3D reconstruction results by the ART method.

Figure 11: Diagram of the capture setup. A diffuser is used to form an image through a pinhole array mask. A high-resolution camera captures the array of pinhole images in a single exposure.

2.3 Getting a clear image through a scattering medium

Image Acquisition using a Pinhole Array

In an imaging setup like Figure 11, rays emitted from a light source are scattered as they pass through a scattering medium, and both the original and the scattered rays are projected onto a screen. The original rays emitted from the light source are called direct component rays, and the additional rays produced by the scattering medium are called global component rays. When there is an object inside the scattering medium, it appears blurred because the global component rays, which have very low spatial frequency, overlap the direct component rays at the sensor. It follows that a clear image of the object inside can be obtained by separating the global component out of the sensed image. A pinhole or lenslet array mask placed in front of a diffuser, as shown in Figure 11, can be applied for this purpose. Figure 12 shows how the direct and global rays are formed through the pinhole or lenslet array. In the image formed under each pinhole there are two distinct regions: a region where the two components mix, and a region of pure global rays. We will separate the scattered component from the sensed values in the mixed region by fitting the global component to the values in the pure-global region under each pinhole.

Figure 12: Image formation model for a multiple-scattering scene using a single pinhole. Note that the directly-transmitted ray impinges on a compact region below the pinhole, yet mixes with scattered global rays. The received signal located away from the direct-only peak is due to scattered rays.

As shown in Figure 13, in the absence of any scattering medium between the light source and the diffuser, the diffuser-plane image consists of a set of sharp peaks under each pinhole. As shown on the right, the pinhole images exhibit extended, blurred patterns when a scattering object is placed between the light source and camera; the globally-scattered light causes samples to appear in the pixels neighboring the central pixel under each pinhole. This blurring of the received image would be impossible to separate without the angular samples contributed by the pinhole array mask. The angular sample directly under each pinhole can be used to estimate the direct-plus-global transmission along the ray between that pixel and the light source, while any non-zero neighboring pixels can be fully attributed to global illumination due to volumetric scattering. From Figure 13 we infer that the image under each pinhole contains two regions: the first holds a mixed signal due to cross-talk between the direct and global components; the second represents a pure global component. In the following section, we show a simple method for analyzing such imagery to estimate separate direct and global components for multiple-scattering media.

Direct-Global Separation via Angular Filtering

In this section we consider direct-global separation for a 1D sensor and a 2D scene; the results extend trivially to 2D sensors and 3D volumes. As shown in the second panel of Figure 14, a single pinhole image consists of two separate regions, a pure global component region and a region of mixed direct and global components. We denote the received intensities at the diffuser-plane pixels by {L_0, L_1, ..., L_n} when a scattering object is placed between the light source and the diffuser. The individual sensor values are modeled as

    L_i = G_i + D_i, \quad i = 0, \ldots, n,    (6)

where {G_i} and {D_i} represent the underlying global and direct intensities measured in the sensor plane, respectively.
As shown in Figure 14, a straightforward algorithm can estimate the direct and global components received at each pinhole. First, we fit a quadratic polynomial to the values outside the non-zero region of a pinhole image obtained with no scattering object present (in our system, the central region of 7x7 pixels is excluded); in this region, L_i ≈ G_i. The polynomial model is then used to approximate the global components {G_i} in the region directly below each pinhole; this region is subject to mixing, so the global component must be extrapolated from the global-only region. Finally, a direct-only image is estimated by subtracting the estimated global component at the central pixel under each pinhole from the measured value, so that D_0 ≈ L_0 - G_0.
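A minimal sketch of this angular-filtering step for a single 1D pinhole image is given below, following the quadratic-fit-and-subtract procedure just described. The 7-pixel mixed window matches the text; the function name and the synthetic data are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def separate_direct_global(pixels, mixed_halfwidth=3):
    """Estimate direct and global components for one 1-D pinhole image.

    pixels: intensities L_{-n}..L_{n} centered under the pinhole.
    The central 2*mixed_halfwidth + 1 samples (7 pixels here, as in our
    system) are treated as mixed; the tails as pure global (L_i ~ G_i).
    """
    n = len(pixels)
    x = np.arange(n) - n // 2
    mixed = np.abs(x) <= mixed_halfwidth
    # Quadratic polynomial fit to the global-only tails.
    coeffs = np.polyfit(x[~mixed], pixels[~mixed], deg=2)
    global_est = np.polyval(coeffs, x)           # extrapolated G_i
    direct_est = np.clip(pixels - global_est, 0.0, None)  # D_i ~ L_i - G_i
    return direct_est, global_est

# Synthetic pinhole image: broad global hump plus a sharp direct peak.
x = np.arange(-10, 11)
L = 0.5 - 0.002 * x ** 2          # smooth global component
L[10] += 2.0                      # direct ray under the pinhole (x = 0)
direct, global_fit = separate_direct_global(L)
print(round(direct[10], 3))       # close to the injected 2.0
```

Because the fit uses only the tails, it remains valid under the pinhole center, where the measurement itself is contaminated by the direct peak; this is what makes the separation possible from a single exposure.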

Figure 13: Pinhole images. (Left) Received image for each pinhole camera when no scattering object is present. (Right) Pinhole images when a scattering object is present.

Figure 14: (From top to bottom) First, a 1D sensor image for a single LED illuminating the diffuser with no object present. Second, an image with a scattering object present. Third, the measured (black) and estimated polynomial fit (red) for the global-only component. Fourth, the direct-only image formed by subtracting the third from the second.

3 Evaluation

General CAT systems use X-ray sources, but I will experiment with visible light sources and translucent objects to reproduce the imaging conditions of X-ray imaging. An X-ray image records, in the 2D spatial domain, the amount of X-ray absorbed as it is transmitted through a medium; this condition can be imitated with a translucent medium through which visible rays are transmitted. The single-shot CAT technique will be evaluated by the quality of the 3D reconstruction and by the imaging time, which is a key factor for real-time 3D view generation. Successful implementation of a wearable CAT system, the other main goal of this thesis, will also be evaluated.

4 Required resources

To imitate X-ray imaging, translucent objects and LED sources will be used. I already have some translucent objects, such as wine glasses and toys, and such objects are easy to find around us. Multiple LED sources have been prepared as well and are aligned in a metal frame. To implement the 4D light field capturing technique over a large angular range, we need a large pinhole-array film or lenslet array; we have some, but may need to make new ones to acquire higher spatial resolution. A normal DSLR digital camera will be used to capture images. For the wearable application, these components must be reassembled to fit a part of the human body, which requires a small imaging sensor and a small lenslet array. Small Dragonfly cameras will be good enough for this purpose, and we have a small lenslet array of 1 in x 1 in. If new lenslet arrays are needed, a suitable one can be purchased from AOA Company in Cambridge.

5 Timeline

The project timeline is shown in Figure 15.

References

ATCHESON, B., IHRKE, I., HEIDRICH, W., TEVS, A., BRADLEY, D., MAGNOR, M., AND SEIDEL, H.-P. 2008. Time-resolved 3D capture of non-stationary gas flows. ACM Transactions on Graphics.

GORTLER, S. J., GRZESZCZUK, R., SZELISKI, R., AND COHEN, M. F. 1996. The lumigraph. SIGGRAPH.

GU, J., NAYAR, S., GRINSPUN, E., BELHUMEUR, P., AND RAMAMOORTHI, R. 2008. Compressive structured light for recovering inhomogeneous participating media. In European Conference on Computer Vision (ECCV).

ISAKSEN, A., MCMILLAN, L., AND GORTLER, S. 2000. Dynamically reparameterized light fields. Proc. SIGGRAPH.

JENSEN, H., MARSCHNER, S., LEVOY, M., AND HANRAHAN, P. 2001. A practical model for subsurface light transport. SIGGRAPH.

LANMAN, D., RASKAR, R., AGRAWAL, A., AND TAUBIN, G. 2008. Modeling and capturing 3D occluders. SIGGRAPH Asia.

LEVOY, M., AND HANRAHAN, P. 1996. Light field rendering. SIGGRAPH 96.

LEVOY, M., CHEN, B., VAISH, V., HOROWITZ, M., MCDOWALL, I., AND BOLAS, M. 2004. Synthetic aperture confocal imaging. ACM Transactions on Graphics 23.

NARASIMHAN, S. G., NAYAR, S. K., SUN, B., AND KOPPAL, S. J. 2005. Structured light in scattering media. In Proc. IEEE ICCV 1.

NASU, O., HIURA, S., AND SATO, K. 2007. Analysis of light transport based on the separation of direct and indirect components. IEEE Intl. Workshop on Projector-Camera Systems (ProCams 2007).

NAYAR, S., KRISHNAN, G., GROSSBERG, M., AND RASKAR, R. 2006. Fast separation of direct and global components of a scene using high frequency illumination. ACM Transactions on Graphics 25, 3.

NG, R., LEVOY, M., BRÉDIF, M., DUVAL, G., HOROWITZ, M., AND HANRAHAN, P. 2005. Light field photography with a hand-held plenoptic camera. Tech. rep., Stanford University.

ROH, Y. J., PARK, W. S., CHO, H. S., AND JEON, H. J. 2004. Implementation of uniform and simultaneous ART for 3-D reconstruction in an X-ray imaging system. IEE Proceedings: Vision, Image and Signal Processing 151.

ROSEN, J., AND ABOOKASIS, D. 2004. Noninvasive optical imaging by speckle ensemble. Optics Letters 29, 3.

SUN, B., RAMAMOORTHI, R., NARASIMHAN, S. G., AND NAYAR, S. K. 2005. A practical analytic single scattering model for real time rendering. ACM Transactions on Graphics.

TRIFONOV, B., BRADLEY, D., AND HEIDRICH, W. 2006. Tomographic reconstruction of transparent objects. Eurographics Symposium on Rendering.

TUCHIN, V. 2000. Tissue Optics. SPIE Press.

VAISH, V., WILBURN, B., JOSHI, N., AND LEVOY, M. 2004. Using plane + parallax for calibrating dense camera arrays. Proc. Conf. Computer Vision and Pattern Recognition.

VEERARAGHAVAN, A., RASKAR, R., AGRAWAL, A., MOHAN, A., AND TUMBLIN, J. 2007. Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM SIGGRAPH 2007.

Figure 15: Timeline.
