Color Alignment in Texture Mapping of Images under Point Light Source and General Lighting Condition


Hiroki Unten, Graduate School of Information Science and Technology, The University of Tokyo
Katsushi Ikeuchi, Graduate School of Interfaculty Initiative in Information Studies, The University of Tokyo

Abstract

Recently, creating digital contents through measurements of the real world has become an important technique in the fields of virtual reality, digital preservation of cultural heritage, and so forth. One of the effective methods for creating such digital contents is texture mapping, which maps color images onto a 3D geometric model. This method has two problems: geometric and photometric. The geometric problem lies mostly in how to estimate the camera position and rotation relative to the 3D position of the target object. The photometric problem deals with color consistency between input textures. Most studies on texture mapping focus on the geometric problem, while only a few works deal with the photometric problem. In this paper, we propose two novel methods for color alignment in texture mapping. One method is based on the fact that the chromaticity of an image is independent of its geometric data. The other method utilizes basis images of point light sources on an illumination sphere. In both methods, the pseudo-albedo is estimated, and color consistency between the input images is achieved.

1 Introduction

Virtual reality systems are expected to expand into applications such as virtual malls, virtual museums, entertainment, and so forth. There are numerous studies on virtual reality systems, particularly on effective methods to create virtual reality models. Currently, most VR models are developed manually, and it takes much cost and time to create them. Automation of modeling is thus strongly desired, and methods to create digital contents through measurements of the real world have been developed. One of the most important steps in model creation is texture
mapping. Texture mapping [1, 2, 3] is a method to map color images onto a 3D shape model; the method provides a realistic appearance to the model. An efficient method for texture mapping faces two major problems: the geometric problem and the photometric problem.

The geometric problem concerns the estimation of the camera position and rotation relative to the 3D position of target objects. Most studies of texture mapping have focused on this problem. Neugebauer and Klein [4] utilize the outline of the object and the intensities of images to align textures with 3D geometric models. Kurazume et al. [5, 6] proposed an automatic method to align textures to 3D geometric models. Their method detects edges from reflectance, which is available from measurement by laser range sensors, and determines camera parameters by minimizing the errors between reflectance edges and texture edges.

The photometric problem concerns eliminating color discontinuities between textures. Image intensity in textures is affected by the illumination conditions and by the photometric and geometric properties of an object. The appearance of textures measured under different conditions is quite different; that difference leads to color discontinuities when those textures are mapped onto a 3D geometric model. In order to eliminate the discontinuities, color alignment between textures is necessary. Neugebauer and Klein [4] calculated a weighted average between multiple textures based on sampling density, smooth transition, and elimination of outliers. Lensch et al. [7] utilized alpha blending to obtain continuity between textures. These average-based methods have difficulty in aligning color images taken under different illumination conditions. Another way to align the colors of textures is to recover the illumination distributions. Marschner and Greenberg [8] recovered the illumination distribution and re-lighted the input textures. They treated an input image as a linear combination of basis images under basis lights on the illumination sphere. However, this method is not applicable to color alignment in texture mapping since it requires

/04 $20.00 (C) 2004 IEEE

the reflectance of the object. Beauchesne and Roy [9] proposed a method for aligning texture colors by re-lighting the input textures under a common lighting condition. This method requires accurate normals for each point and ignores self-shadow effects.

In this paper, we propose novel methods for color alignment in texture mapping. We assume that the object surface is Lambertian. We allow the illumination condition (the number of light sources and their directions) to change between image acquisitions. First, we consider a method based on chromaticity segmentation. Then, we derive a method utilizing basis images of point light sources on an illumination sphere. The latter method is applicable to images containing self-shadows because shadows and shading are calculated by rendering. In both methods, the pseudo-albedo is estimated, and thus color consistency between the input images is achieved. In the context of object modeling, measuring albedo is important: from a 3D geometric model and albedo, we can generate images under arbitrary illumination conditions by rendering.

The remainder of the paper is organized as follows. Section 2 and Section 3 present color alignment based on chromaticity consistency (method I) and color alignment based on the illumination sphere (method II), respectively. In Section 4, we demonstrate the effectiveness of our methods by describing our experiments. Finally, we conclude the paper in Section 5.

2 Method I: Color Alignment based on Chromaticity Consistency

The image intensity of a Lambertian object under a point light source can be written as

I_x(c) = L(c) S_x(c) cos θ_x,  (1)

where c = {r, g, b}, x is the pixel index, I_x(c) is the image intensity at x, L(c) is the illumination color, S_x(c) is the surface color (albedo), and θ_x is the angle between the normal direction at x and the illumination direction. We refer to L(c)S_x(c) as the pseudo-albedo.

Figure 1: Overview of pseudo-albedo estimation

Figure 1 shows an overview of pseudo-albedo estimation. We estimate the illumination direction (block (b) in Figure 1) by utilizing the normal directions (block (a) in Figure 1) of the object calculated from the 3D geometric model. Image chromaticity (block (c) in Figure 1) is defined as

I_{x,c} = I_x(c) / Σ_c I_x(c).  (2)

Considering equations (1) and (2), I_{x,c} depends only on the albedo and the illumination color. We assume that regions with the same chromaticity have the same albedo. We refer to a region that has one albedo as A. Let the normal direction at x and the illumination direction be n(x) and L, respectively. Because S_x(c) is a constant S_const(c) in A, the following equation is derived:

I_x(c) = L(c) S_const(c) cos θ_x = k n(x) · L,  (3)

where k is a scalar constant. By substituting I_x(c) and n(x) in the region A into equation (3), we can estimate L by the least-squares method and acquire cos θ_x (block (d) in Figure 1) from L. If I_x(c) and cos θ_x are known, the pseudo-albedo of each pixel can be independently estimated as L(c)S_x(c) = I_x(c)/cos θ_x from equation (1). However, if the 3D model is not perfectly measured and its surface orientations are unreliable, the pseudo-albedo values obtained at pixels with unreliable surface orientations are also unreliable. Therefore, in our method, we do not estimate the pseudo-albedo of each pixel independently; instead, we estimate the pseudo-albedo of each region with the same chromaticity, utilizing the fact that image chromaticity is independent of the geometry of the model.

We introduce the normalized illumination color L_c = L(c)/L and the normalized albedo S_{x,c} = S_x(c)/S_x, where L = Σ_c L(c) and S_x = Σ_c S_x(c) are the total illumination color and the total albedo, respectively. By substituting these into (1) and (2), we get

L(c) S_x(c) = I_{x,c} · L S_x Σ_c L_c S_{x,c} = I_{x,c} T_x,  (4)

where T_x = L S_x Σ_c L_c S_{x,c}. We make the following assumption about the target object: the total albedo is the same in regions where the normalized albedo is the same; in other words, the total albedo does not directly depend on the position. Then S_x can be represented as a function of I_{x,c}, because the illumination color and direction are constant within an image. From the discussion above,

T_x is also represented as a function of I_{x,c}; we refer to this function as the chromaticity-T map, T(I_r, I_g, I_b) (block (e) in Figure 1). Note that the chromaticity-T map does not directly depend on x, but rather on I_{x,c}. We construct the chromaticity-T map by quantizing the chromaticity space and estimating a T value at each point. Note that T_x = Σ_c I_x(c) / cos θ_x from equations (1) and (4).

Figure 2: Overview of chromaticity-T map estimation

Figure 2 shows the outline of the process for generating the chromaticity-T map. We calculate T_x = Σ_c I_x(c) / cos θ_x for each pixel (block (a) in Figure 2) and vote the value to I_{x,c} in chromaticity space (block (b) in Figure 2). Then, we calculate a histogram of Σ_c I_x(c) / cos θ_x at each point in chromaticity space from all the pixels in the image. The chromaticity-T map (block (d) in Figure 2) is defined as the median of each histogram (block (c) in Figure 2). From the chromaticity-T map, we can calculate the pseudo-albedo by equation (4). Even though our assumption about the target object is not universally applicable, it gives a sufficient chromaticity-T map for most objects.

Once the pseudo-albedo is estimated for input images from different viewpoints, we can obtain a textured 3D geometric model with color consistency by mapping the pseudo-albedo onto the 3D geometric model. If the illumination colors are known, e.g., from a white calibration board, we can estimate the albedo from the pseudo-albedo.

3 Method II: Color Alignment based on the Illumination Sphere

The previous method can determine albedo distributions very easily and is quite handy. On the other hand, it has difficulty in handling curved surfaces and extended light sources. For example, it is difficult to apply the method to images taken under multiple light sources. In this section, we describe how we extend the method to handle those cases by using the concept of the illumination sphere.

First, we introduce the illumination sphere: an infinitely distant light sphere surrounding the scene. In method II, we restrict illumination to distant light sources. Each direction on the illumination sphere represents the direction of a light source, and the intensity of the illumination sphere represents the intensity of that light source. We approximate the illumination sphere by a series of distant point light sources on a sphere. Let A^m be a series of rendered images of the 3D geometric model under these point light sources. In the rendering process, we use a constant Lambertian parameter for the model. We refer to this series of images as basis images (blocks (a1) and (a2) in Figure 3).

Figure 3: Overview of method II

Let I_1(x), I_2(x) be images acquired by an image sensor (input images); the following equation is then derived:

I_n(x) = S(x) (a_n^1 A_n^1(x) + ... + a_n^m A_n^m(x) + ... + a_n^M A_n^M(x)) = S(x) L_n(x),  (5)

where m = 1, ..., M is the index of a basis image, n = 1, 2 is the index of an input image, x is the pixel index, and S(x) is the albedo at x. L_n(x) contains the shade and shadow information of the input image, and we refer to L_n(x) as the irradiance image. If we obtain the irradiance image, we can estimate the albedo from (5). Because the irradiance image represents the shading and shadows, we can estimate the albedo even at positions in self-shadow.

Second, we define k(x) = I_1(x)/I_2(x) (block (b) in Figure 3); from equation (5), the following equation is derived:

k(x) = (a_1^1 A_1^1(x) + ... + a_1^m A_1^m(x) + ... + a_1^M A_1^M(x)) / (a_2^1 A_2^1(x) + ... + a_2^m A_2^m(x) + ... + a_2^M A_2^M(x)).  (6)

The following equation is derived from equation (6) (block (c) in Figure 3):

U a = 0,  (7)
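The null-space solution of equation (7) can be sketched numerically. The following is a minimal NumPy illustration, not the paper's implementation: the basis images are random arrays instead of renderings of a real 3D model, and all array names are ours. Each row of U carries the basis values of image 1 and the k-weighted, negated basis values of image 2, so the true coefficient vector lies in the null space; the smallest singular vector recovers it up to scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic setup: N pixels, M basis images per input image.
# A1[m], A2[m] stand in for basis images rendered with a constant Lambertian
# parameter; a1_true, a2_true are the unknown light coefficients.
N, M = 200, 4
A1 = rng.uniform(0.1, 1.0, (M, N))          # basis images, input image 1
A2 = rng.uniform(0.1, 1.0, (M, N))          # basis images, input image 2
a1_true = rng.uniform(0.5, 2.0, M)
a2_true = rng.uniform(0.5, 2.0, M)

S = rng.uniform(0.2, 1.0, N)                # albedo S(x)
L1 = a1_true @ A1                           # irradiance images, eq. (5)
L2 = a2_true @ A2
I1, I2 = S * L1, S * L2                     # observed input images

# Row x of U: [A_1^m(x) ..., -k(x) A_2^m(x) ...], so that U a = 0 encodes
# eq. (6) with k(x) = I1(x)/I2(x).
k = I1 / I2                                 # block (b) in Figure 3
U = np.hstack([A1.T, -k[:, None] * A2.T])   # shape (N, 2M)

# U a = 0: take the right-singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(U, full_matrices=False)
a = Vt[-1]                                  # determined up to scale
a1_est, a2_est = a[:M], a[M:]

# Irradiance image up to one global scale, then albedo from eq. (5).
L1_est = a1_est @ A1
albedo_est = I1 / L1_est                    # proportional to the true S(x)
scale = albedo_est[0] / S[0]
print(np.allclose(albedo_est, scale * S, rtol=1e-5))
```

The scale factor reflects the ambiguity noted in the text: only the product of the coefficient scale and the albedo scale is observable, which is why color input images yield a pseudo-albedo rather than the true albedo.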

where

U = ( A_1^1(1)  ...  A_1^M(1)   -k(1)A_2^1(1)  ...  -k(1)A_2^M(1) )
    ( ...                                                         )
    ( A_1^1(n)  ...  A_1^M(n)   -k(n)A_2^1(n)  ...  -k(n)A_2^M(n) )
    ( ...                                                         )
    ( A_1^1(N)  ...  A_1^M(N)   -k(N)A_2^1(N)  ...  -k(N)A_2^M(N) ),  (8)

a = ( a_1^1  ...  a_1^M  a_2^1  ...  a_2^M )^T,  (9)

and N is the number of pixels. Because U can be computed from the input images and the basis images, we can determine the a_n^m up to a scale ambiguity. We then obtain L_1(x) and L_2(x) (blocks (d1) and (d2) in Figure 3) and, considering equation (5), we can estimate the albedo for each input image (blocks (e1) and (e2) in Figure 3).

Because an image of a convex Lambertian surface does not contain high-frequency components, it is difficult to reconstruct an illumination distribution from such an image [8]. However, the important point is that, for our purpose, there is no need to estimate the actual illumination distribution; only an illumination distribution that accounts for the shade and shadows of the input image is needed.

Up to now, the discussion in this section has been in terms of gray-scale images. When considering color images, we have to estimate a_n^m for each color channel. In this case, the scale ambiguity of a_n^m affects the albedo in each color channel, because a_n^m for each channel is estimated independently. Therefore, in the case of color input images, what we obtain is the pseudo-albedo, as in method I.

4 Experiments

4.1 Method I: Color Alignment based on Chromaticity

Some of the input images and the 3D geometric model are shown in Figure 4. Textures were measured with a Sony DXC-900, and the 3D geometric model with a Minolta VIVID 900. Although the VIVID 900 also measures textures, we decided to use the DXC-900 for texture measurements because of its higher-quality images. First, we estimated the camera parameters from the image and the 3D geometric model of a calibration box. Then, we mapped the input images onto the 3D geometric model. We took into consideration the focal length, principal point, and skew as intrinsic parameters, and the rotation and translation as extrinsic parameters. Normal directions of the 3D geometric model were estimated for each point. By utilizing the normal directions, illumination directions were estimated for each input image. Figure 5 shows the relationship between cos θ and the image intensity of an input image; it confirms equation (3), i.e., that the image intensity is proportional to cos θ.

Figure 4: Some of the input images and the 3D geometric model
Figure 5: The relationship between cos θ and intensity

Then, for each texture, a chromaticity-T map was estimated. One of the maps is shown in Figure 6; different points in chromaticity space have different T values. By utilizing this chromaticity-T map, the pseudo-albedo was estimated from each input image. Figure 7 shows the pseudo-albedo estimated from two input images. Figure 8 shows histograms of the differences between the pseudo-albedos of overlapping regions. Results from 12 input images covering almost the entire region of the object are shown in Figure 9. For points corresponding to more than one texture, we adopted the median of the intensities at those points. There is no color discontinuity in the final results, and we see no errors on the geometric edges.

Figure 6: Chromaticity-T map
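The Method I pipeline evaluated above (light-direction fit by equation (3), cos θ recovery, chromaticity-T map voting, pseudo-albedo by equation (4)) can be sketched on synthetic data. This is a simplified illustration under our own assumptions, not the authors' code: a single constant-albedo Lambertian region, noiseless intensities, and one distant light, so the chromaticity map degenerates to a single bin (a real image would populate many bins).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: pixels with known normals from the 3D model, one
# constant-albedo region A, one distant point light (assumptions of eq. (3)).
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)           # unit normals n(x)
L_dir = np.array([0.3, 0.4, 0.866])
L_dir /= np.linalg.norm(L_dir)
n = n[n @ L_dir > 0.1]                                   # keep lit pixels
cos_theta = n @ L_dir

light = np.array([1.0, 0.9, 0.8])                        # L(c)
albedo = np.array([0.6, 0.4, 0.2])                       # S(c), constant in A
I = cos_theta[:, None] * (light * albedo)[None, :]       # eq. (1), pixels x rgb

# Eq. (3): in a constant-albedo region each channel is linear in the normal,
# so the scaled light k*L is recovered by least squares (blocks (b), (d)).
kL, *_ = np.linalg.lstsq(n, I[:, 0], rcond=None)
L_est = kL / np.linalg.norm(kL)                          # illumination direction
cos_est = n @ L_est

# Chromaticity-T map (Figure 2): vote T = sum_c I(c)/cos(theta) into
# quantized chromaticity bins and keep the per-bin median.
chroma = I / I.sum(axis=1, keepdims=True)                # eq. (2)
T_vals = I.sum(axis=1) / cos_est
bins = [tuple(b) for b in np.round(chroma[:, :2] * 32).astype(int)]
votes = {}
for b, t in zip(bins, T_vals):
    votes.setdefault(b, []).append(t)
tmap = {b: np.median(v) for b, v in votes.items()}       # chromaticity-T map

# Pseudo-albedo from eq. (4): L(c)S(c) = I_{x,c} * T_x.
pseudo = chroma * np.array([tmap[b] for b in bins])[:, None]
print(np.allclose(pseudo, light * albedo, atol=1e-6))
```

Using the per-bin median rather than the raw per-pixel T is what makes the estimate robust when cos θ is unreliable at some pixels, which is the motivation given in Section 2.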

Figure 7: Estimated pseudo-albedo: (a) from image 1, (b) from image 2
Figure 8: Histograms of the difference between the pseudo-albedo estimated from image 1 and the one from image 2: (a) red, (b) green, (c) blue

4.2 Method II: Color Alignment based on the Illumination Sphere

First, we demonstrate the results of the color alignment method based on the illumination sphere when applied to synthetic images. We assume Lambertian reflectance on the object. In this experiment, the viewpoints of the two input images are the same and only the illumination conditions differ. Figures 10(a) and (d) show the input images. From each input image, irradiance images were estimated (Figures 10(b) and (e)), and consequently pseudo-albedo images were calculated for the illuminated regions (Figures 10(c) and (f)). We confirmed that this method is effective where self-shadows exist.

Second, we show results applied to real images. The input images and the 3D geometric model were measured in the same way as in Section 4.1. In this experiment, two input images, measured from different viewpoints and under different illumination conditions, were used. Images of the pseudo-albedo estimated from each input image are mapped onto the 3D geometric model (Figure 11). For points corresponding to more than one texture, we adopted the texture whose angle between the viewing direction and the normal direction at the point was smallest. There are no seam lines between the two images (Figure 11(b)).

Figure 9: Merged pseudo-albedo

The absolute difference between the pseudo-albedo estimated from image 1 and the one from image 2 is shown in Figure 12: black and white points represent differences of less than 10 and of 10 or more, respectively. Histograms of the difference between the pseudo-albedo estimated from image 1 and the one from image 2 are also shown in Figure 13.

5 Conclusion

We have presented two novel methods for color alignment in texture mapping. Method I is based on chromaticity consistency on a 3D geometric model and is effective when the 3D geometric model is not perfectly acquired. Method II utilizes basis images of point light sources on an illumination sphere; it is applicable to images containing self-shadows because shadows and shading are calculated by rendering. In both methods, the pseudo-albedo is estimated; thus, color consistency between the input images is accomplished. We demonstrated the effectiveness of the proposed methods through experiments on synthetic and real images.

Acknowledgments

This project is funded by Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST). Supatana Auethavekiat provided many insightful comments on this paper.

Figure 10: Synthetic test. Input images are image 1 (a) and image 2 (d). The estimated irradiance images are (b) and (e). The estimated pseudo-albedo images are (c) and (f). The top row is for image 1 and the bottom row is for image 2.
Figure 11: Estimated pseudo-albedos: (a) image 1, (b) both, (c) image 2
Figure 12: Absolute difference between the estimated pseudo-albedo from image 1 and the one from image 2
Figure 13: Histograms of the difference between the estimated pseudo-albedo from image 1 and the one from image 2: (a) red, (b) green, (c) blue

References

[1] E. Praun, A. Finkelstein, and H. Hoppe, "Lapped textures," in SIGGRAPH 2000, 2000.
[2] P. V. Sander, J. Snyder, S. J. Gortler, and H. Hoppe, "Texture mapping progressive meshes," in SIGGRAPH 2001, 2001.
[3] B. Levy, "Constrained texture mapping for polygonal meshes," in SIGGRAPH 2001, 2001.
[4] P. J. Neugebauer and K. Klein, "Texturing 3D models of real world objects from multiple unregistered photographic views," in EUROGRAPHICS '99, 1999.
[5] R. Kurazume, M. D. Wheeler, and K. Ikeuchi, "Mapping textures on 3D geometric model using reflectance image," in Data Fusion Workshop in IEEE Int. Conf. on Robotics and Automation, 2001.
[6] R. Kurazume, K. Nishino, Z. Zhang, and K. Ikeuchi, "Simultaneous 2D images and 3D geometric model registration for texture mapping utilizing reflectance attribute," in Fifth Asian Conference on Computer Vision (ACCV), 2002, vol. 1.
[7] H. Lensch, W. Heidrich, and H.-P. Seidel, "Automated texture registration and stitching for real world models," in Pacific Graphics '00, 2000.
[8] S. R. Marschner and D. P. Greenberg, "Inverse lighting for photography," in IS&T/SID Fifth Color Imaging Conference, Nov. 1997.
[9] E. Beauchesne and S. Roy, "Automatic relighting of overlapping textures of a 3D model," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2003.


More information

Overview of Active Vision Techniques

Overview of Active Vision Techniques SIGGRAPH 99 Course on 3D Photography Overview of Active Vision Techniques Brian Curless University of Washington Overview Introduction Active vision techniques Imaging radar Triangulation Moire Active

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Today: dense 3D reconstruction The matching problem

More information

Determining Reflectance Parameters and Illumination Distribution from a Sparse Set of Images for View-dependent Image Synthesis

Determining Reflectance Parameters and Illumination Distribution from a Sparse Set of Images for View-dependent Image Synthesis Determining Reflectance Parameters and Illumination Distribution from a Sparse Set of Images for View-dependent Image Synthesis Ko Nishino, Zhengyou Zhang and Katsushi Ikeuchi Dept. of Info. Science, Grad.

More information

Il colore: acquisizione e visualizzazione. Lezione 17: 11 Maggio 2012

Il colore: acquisizione e visualizzazione. Lezione 17: 11 Maggio 2012 Il colore: acquisizione e visualizzazione Lezione 17: 11 Maggio 2012 The importance of color information Precision vs. Perception 3D scanned geometry Photo Color and appearance Pure geometry Pure color

More information

DEVELOPMENT OF REAL TIME 3-D MEASUREMENT SYSTEM USING INTENSITY RATIO METHOD

DEVELOPMENT OF REAL TIME 3-D MEASUREMENT SYSTEM USING INTENSITY RATIO METHOD DEVELOPMENT OF REAL TIME 3-D MEASUREMENT SYSTEM USING INTENSITY RATIO METHOD Takeo MIYASAKA and Kazuo ARAKI Graduate School of Computer and Cognitive Sciences, Chukyo University, Japan miyasaka@grad.sccs.chukto-u.ac.jp,

More information

Chapter 1 Introduction

Chapter 1 Introduction Chapter 1 Introduction The central problem in computer graphics is creating, or rendering, realistic computergenerated images that are indistinguishable from real photographs, a goal referred to as photorealism.

More information

Model-based Enhancement of Lighting Conditions in Image Sequences

Model-based Enhancement of Lighting Conditions in Image Sequences Model-based Enhancement of Lighting Conditions in Image Sequences Peter Eisert and Bernd Girod Information Systems Laboratory Stanford University {eisert,bgirod}@stanford.edu http://www.stanford.edu/ eisert

More information

Compositing a bird's eye view mosaic

Compositing a bird's eye view mosaic Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

Real-Time Video-Based Rendering from Multiple Cameras

Real-Time Video-Based Rendering from Multiple Cameras Real-Time Video-Based Rendering from Multiple Cameras Vincent Nozick Hideo Saito Graduate School of Science and Technology, Keio University, Japan E-mail: {nozick,saito}@ozawa.ics.keio.ac.jp Abstract In

More information

Computed Photography - Final Project Endoscope Exploration on Knee Surface

Computed Photography - Final Project Endoscope Exploration on Knee Surface 15-862 Computed Photography - Final Project Endoscope Exploration on Knee Surface Chenyu Wu Robotics Institute, Nov. 2005 Abstract Endoscope is widely used in the minimally invasive surgery. However the

More information

Il colore: acquisizione e visualizzazione. Lezione 20: 11 Maggio 2011

Il colore: acquisizione e visualizzazione. Lezione 20: 11 Maggio 2011 Il colore: acquisizione e visualizzazione Lezione 20: 11 Maggio 2011 Outline The importance of color What is color? Material properties vs. unshaded color Texture building from photos Image registration

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute

More information

Department of Computer Engineering, Middle East Technical University, Ankara, Turkey, TR-06531

Department of Computer Engineering, Middle East Technical University, Ankara, Turkey, TR-06531 INEXPENSIVE AND ROBUST 3D MODEL ACQUISITION SYSTEM FOR THREE-DIMENSIONAL MODELING OF SMALL ARTIFACTS Ulaş Yılmaz, Oğuz Özün, Burçak Otlu, Adem Mulayim, Volkan Atalay {ulas, oguz, burcak, adem, volkan}@ceng.metu.edu.tr

More information

Dense 3D Reconstruction. Christiano Gava

Dense 3D Reconstruction. Christiano Gava Dense 3D Reconstruction Christiano Gava christiano.gava@dfki.de Outline Previous lecture: structure and motion II Structure and motion loop Triangulation Wide baseline matching (SIFT) Today: dense 3D reconstruction

More information

Recognizing Buildings in Urban Scene of Distant View ABSTRACT

Recognizing Buildings in Urban Scene of Distant View ABSTRACT Recognizing Buildings in Urban Scene of Distant View Peilin Liu, Katsushi Ikeuchi and Masao Sakauchi Institute of Industrial Science, University of Tokyo, Japan 7-22-1 Roppongi, Minato-ku, Tokyo 106, Japan

More information

Object Shape and Reflectance Modeling from Observation

Object Shape and Reflectance Modeling from Observation Object Shape and Reflectance Modeling from Observation Yoichi Sato 1, Mark D. Wheeler 2, and Katsushi Ikeuchi 1 ABSTRACT 1 Institute of Industrial Science University of Tokyo An object model for computer

More information

Visual Appearance and Color. Gianpaolo Palma

Visual Appearance and Color. Gianpaolo Palma Visual Appearance and Color Gianpaolo Palma LIGHT MATERIAL Visual Appearance Color due to the interaction between the lighting environment (intensity, position, ) and the properties of the object surface

More information

Rendering Light Reflection Models

Rendering Light Reflection Models Rendering Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 3, 2017 Lecture #13 Program of Computer Graphics, Cornell University General Electric - 167 Cornell in

More information

Structured light 3D reconstruction

Structured light 3D reconstruction Structured light 3D reconstruction Reconstruction pipeline and industrial applications rodola@dsi.unive.it 11/05/2010 3D Reconstruction 3D reconstruction is the process of capturing the shape and appearance

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 3. HIGH DYNAMIC RANGE Computer Vision 2 Dr. Benjamin Guthier Pixel Value Content of this

More information

CS5670: Computer Vision

CS5670: Computer Vision CS5670: Computer Vision Noah Snavely Light & Perception Announcements Quiz on Tuesday Project 3 code due Monday, April 17, by 11:59pm artifact due Wednesday, April 19, by 11:59pm Can we determine shape

More information

Capture and Displays CS 211A

Capture and Displays CS 211A Capture and Displays CS 211A HDR Image Bilateral Filter Color Gamut Natural Colors Camera Gamut Traditional Displays LCD panels The gamut is the result of the filters used In projectors Recent high gamut

More information

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison CHAPTER 9 Classification Scheme Using Modified Photometric Stereo and 2D Spectra Comparison 9.1. Introduction In Chapter 8, even we combine more feature spaces and more feature generators, we note that

More information

Research White Paper WHP 143. Multi-camera radiometric surface modelling for image-based re-lighting BRITISH BROADCASTING CORPORATION.

Research White Paper WHP 143. Multi-camera radiometric surface modelling for image-based re-lighting BRITISH BROADCASTING CORPORATION. Research White Paper WHP 143 11 January 2007 Multi-camera radiometric surface modelling for image-based re-lighting Oliver Grau BRITISH BROADCASTING CORPORATION Multi-camera radiometric surface modelling

More information

Introduction to Computer Vision. Week 8, Fall 2010 Instructor: Prof. Ko Nishino

Introduction to Computer Vision. Week 8, Fall 2010 Instructor: Prof. Ko Nishino Introduction to Computer Vision Week 8, Fall 2010 Instructor: Prof. Ko Nishino Midterm Project 2 without radial distortion correction with radial distortion correction Light Light Light! How do you recover

More information

Real Time 3D Environment Modeling for a Mobile Robot by Aligning Range Image Sequences

Real Time 3D Environment Modeling for a Mobile Robot by Aligning Range Image Sequences Real Time 3D Environment Modeling for a Mobile Robot by Aligning Range Image Sequences Ryusuke Sagawa, Nanaho Osawa, Tomio Echigo and Yasushi Yagi Institute of Scientific and Industrial Research, Osaka

More information

3D FACE RECONSTRUCTION BASED ON EPIPOLAR GEOMETRY

3D FACE RECONSTRUCTION BASED ON EPIPOLAR GEOMETRY IJDW Volume 4 Number January-June 202 pp. 45-50 3D FACE RECONSRUCION BASED ON EPIPOLAR GEOMERY aher Khadhraoui, Faouzi Benzarti 2 and Hamid Amiri 3,2,3 Signal, Image Processing and Patterns Recognition

More information

Announcements. Photometric Stereo. Shading reveals 3-D surface geometry. Photometric Stereo Rigs: One viewpoint, changing lighting

Announcements. Photometric Stereo. Shading reveals 3-D surface geometry. Photometric Stereo Rigs: One viewpoint, changing lighting Announcements Today Photometric Stereo, next lecture return to stereo Photometric Stereo Introduction to Computer Vision CSE152 Lecture 16 Shading reveals 3-D surface geometry Two shape-from-x methods

More information

Rendering Light Reflection Models

Rendering Light Reflection Models Rendering Light Reflection Models Visual Imaging in the Electronic Age Donald P. Greenberg October 27, 2015 Lecture #18 Goal of Realistic Imaging The resulting images should be physically accurate and

More information

Interreflection Removal for Photometric Stereo by Using Spectrum-dependent Albedo

Interreflection Removal for Photometric Stereo by Using Spectrum-dependent Albedo Interreflection Removal for Photometric Stereo by Using Spectrum-dependent Albedo Miao Liao 1, Xinyu Huang, and Ruigang Yang 1 1 Department of Computer Science, University of Kentucky Department of Mathematics

More information

Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model

Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model [DOI: 10.2197/ipsjtcva.7.89] Express Paper Rectification of Aerial 3D Laser Scans via Line-based Registration to Ground Model Ryoichi Ishikawa 1,a) Bo Zheng 1,b) Takeshi Oishi 1,c) Katsushi Ikeuchi 1,d)

More information

Factorization Method Using Interpolated Feature Tracking via Projective Geometry

Factorization Method Using Interpolated Feature Tracking via Projective Geometry Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,

More information

Re-rendering from a Sparse Set of Images

Re-rendering from a Sparse Set of Images Re-rendering from a Sparse Set of Images Ko Nishino, Drexel University Katsushi Ikeuchi, The University of Tokyo and Zhengyou Zhang, Microsoft Research Technical Report DU-CS-05-12 Department of Computer

More information

General Principles of 3D Image Analysis

General Principles of 3D Image Analysis General Principles of 3D Image Analysis high-level interpretations objects scene elements Extraction of 3D information from an image (sequence) is important for - vision in general (= scene reconstruction)

More information

Shape and Appearance from Images and Range Data

Shape and Appearance from Images and Range Data SIGGRAPH 2000 Course on 3D Photography Shape and Appearance from Images and Range Data Brian Curless University of Washington Overview Range images vs. point clouds Registration Reconstruction from point

More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 12 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

An Algorithm for Seamless Image Stitching and Its Application

An Algorithm for Seamless Image Stitching and Its Application An Algorithm for Seamless Image Stitching and Its Application Jing Xing, Zhenjiang Miao, and Jing Chen Institute of Information Science, Beijing JiaoTong University, Beijing 100044, P.R. China Abstract.

More information

1216 P a g e 2.1 TRANSLATION PARAMETERS ESTIMATION. If f (x, y) F(ξ,η) then. f(x,y)exp[j2π(ξx 0 +ηy 0 )/ N] ) F(ξ- ξ 0,η- η 0 ) and

1216 P a g e 2.1 TRANSLATION PARAMETERS ESTIMATION. If f (x, y) F(ξ,η) then. f(x,y)exp[j2π(ξx 0 +ηy 0 )/ N] ) F(ξ- ξ 0,η- η 0 ) and An Image Stitching System using Featureless Registration and Minimal Blending Ch.Rajesh Kumar *,N.Nikhita *,Santosh Roy *,V.V.S.Murthy ** * (Student Scholar,Department of ECE, K L University, Guntur,AP,India)

More information

Mixture of Spherical Distributions for Single-View Relighting

Mixture of Spherical Distributions for Single-View Relighting , 2007 1 Mixture of Spherical Distributions for Single-View Relighting Kenji Hara, Member, IEEE Ko Nishino, Member, IEEE Katsushi Ikeuchi, Fellow, IEEE K. Hara is with the Department of Visual Communication

More information

Acquisition and Visualization of Colored 3D Objects

Acquisition and Visualization of Colored 3D Objects Acquisition and Visualization of Colored 3D Objects Kari Pulli Stanford University Stanford, CA, U.S.A kapu@cs.stanford.edu Habib Abi-Rached, Tom Duchamp, Linda G. Shapiro and Werner Stuetzle University

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric

More information

Image-Based Modeling and Rendering. Image-Based Modeling and Rendering. Final projects IBMR. What we have learnt so far. What IBMR is about

Image-Based Modeling and Rendering. Image-Based Modeling and Rendering. Final projects IBMR. What we have learnt so far. What IBMR is about Image-Based Modeling and Rendering Image-Based Modeling and Rendering MIT EECS 6.837 Frédo Durand and Seth Teller 1 Some slides courtesy of Leonard McMillan, Wojciech Matusik, Byong Mok Oh, Max Chen 2

More information

Camera Calibration for a Robust Omni-directional Photogrammetry System

Camera Calibration for a Robust Omni-directional Photogrammetry System Camera Calibration for a Robust Omni-directional Photogrammetry System Fuad Khan 1, Michael Chapman 2, Jonathan Li 3 1 Immersive Media Corporation Calgary, Alberta, Canada 2 Ryerson University Toronto,

More information

5LSH0 Advanced Topics Video & Analysis

5LSH0 Advanced Topics Video & Analysis 1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl

More information

CS4733 Class Notes, Computer Vision

CS4733 Class Notes, Computer Vision CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision

More information

Recovering light directions and camera poses from a single sphere.

Recovering light directions and camera poses from a single sphere. Title Recovering light directions and camera poses from a single sphere Author(s) Wong, KYK; Schnieders, D; Li, S Citation The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France,

More information

Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal

Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal Ryusuke Homma, Takao Makino, Koichi Takase, Norimichi Tsumura, Toshiya Nakaguchi and Yoichi Miyake Chiba University, Japan

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection

More information

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Jing Wang and Kristin J. Dana Electrical and Computer Engineering Department Rutgers University Piscataway, NJ, USA {jingwang,kdana}@caip.rutgers.edu

More information

Face Re-Lighting from a Single Image under Harsh Lighting Conditions

Face Re-Lighting from a Single Image under Harsh Lighting Conditions Face Re-Lighting from a Single Image under Harsh Lighting Conditions Yang Wang 1, Zicheng Liu 2, Gang Hua 3, Zhen Wen 4, Zhengyou Zhang 2, Dimitris Samaras 5 1 The Robotics Institute, Carnegie Mellon University,

More information

Generation of Binocular Object Movies from Monocular Object Movies

Generation of Binocular Object Movies from Monocular Object Movies Generation of Binocular Object Movies from Monocular Object Movies Ying-Ruei Chen 3, Wan-Yen Lo 3, Yu-Pao Tsai 1,4, and Yi-Ping Hung 1,2,3 1 Institute of Information Science, Academia Sinica, Taipei, Taiwan

More information

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Topics to be Covered in the Rest of the Semester CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Charles Stewart Department of Computer Science Rensselaer Polytechnic

More information

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors

Complex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual

More information

AUTOMATIC RECONSTRUCTION OF LARGE-SCALE VIRTUAL ENVIRONMENT FOR INTELLIGENT TRANSPORTATION SYSTEMS SIMULATION

AUTOMATIC RECONSTRUCTION OF LARGE-SCALE VIRTUAL ENVIRONMENT FOR INTELLIGENT TRANSPORTATION SYSTEMS SIMULATION AUTOMATIC RECONSTRUCTION OF LARGE-SCALE VIRTUAL ENVIRONMENT FOR INTELLIGENT TRANSPORTATION SYSTEMS SIMULATION Khairil Azmi, Shintaro Ono, Masataka Kagesawa, Katsushi Ikeuchi Institute of Industrial Science,

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Geometric and Photometric Restoration of Distorted Documents

Geometric and Photometric Restoration of Distorted Documents Geometric and Photometric Restoration of Distorted Documents Mingxuan Sun, Ruigang Yang, Lin Yun, George Landon, Brent Seales University of Kentucky Lexington, KY Michael S. Brown Cal State - Monterey

More information