3D Object Model Acquisition from Silhouettes


4th International Symposium on Computing and Multimedia Studies

Masaaki Iiyama, Koh Kakusho, Michihiko Minoh
Academic Center for Computing and Media Studies, Kyoto University
Yoshida Honmachi, Sakyo, Kyoto, Japan
iiyama@media.kyoto-u.ac.jp

Keywords: Shape from Silhouettes, Articulated Object, Photometric Stereo, EM Algorithm

Abstract: This paper proposes an approach for acquiring a 3D object model from silhouettes. The model consists of the photometry, geometry and motion of the object. Acquiring a 3D object model conventionally imposes restrictions on the object's shape and requires the shape to be measured in advance. We present three approaches that remove these restrictions and this preparation.

1 Introduction

This paper proposes an approach for acquiring a 3D object model from silhouettes. A 3D object model is a set of object properties needed to reproduce the object's appearance. The appearance is determined by several factors. Objects change their appearance as they change their pose. Lighting environments also change the appearance: the object's surface reflects incident light, and this reflection, determined by the reflectance properties and the surface normals, gives the appearance. The viewpoint matters as well: for a given viewpoint, the object's shape determines the appearance. The appearance is thus given by three properties: photometry, modeled by reflection properties; geometry, modeled by the object's shape; and motion, modeled by a sequence of object poses. A 3D object model with these three properties can reproduce the object's appearance.

Reproducing the object's appearance plays an important role in various applications, including teleconferencing, virtual museums, and rapid prototyping. Traditional teleconference systems observe a scene with cameras and transmit the camera images as they are.
Teleconferencing requires realistic images, but raw camera images are not realistic enough, so replacing such traditional images with images that have sufficient reality is a desired task. Synthesizing arbitrary views from camera images is one solution: it provides views that traditional teleconference systems cannot, and these arbitrary views overcome the lack of reality. A 3D object model that can synthesize arbitrary views is therefore required.

A virtual museum is a collection of digitized specimens of various objects in the world. Ordinary virtual museums collect static objects such as paintings, sculptures, and artifacts. What a virtual museum should offer is something a traditional, non-virtual museum cannot. Collecting moving objects, such as insects, as digitized specimens answers this requirement: traditional museums collect moving objects but cannot show how those objects moved, whereas a virtual museum can show the object's appearance in an arbitrary pose once its motion has been captured as a 3D object model. To realize such a virtual museum, we have to reconstruct a 3D model that includes the object's motion.

Rapid prototyping is a technology for the speedy fabrication of sample parts for demonstration, evaluation, or testing. Measuring the shape of a mock-up is one of its important processes: the measured shape data is converted into CAD data and used for product development. Rapid prototyping requires an accurate 3D model, so the smooth and concave surfaces of objects must be modeled. To support rapid prototyping, we have to reconstruct an accurate 3D shape of sample parts, including their smooth and concave surfaces.

These applications require 3D models of various objects, so a versatile acquisition approach is needed. Using silhouettes satisfies this requirement, because an object's silhouette can be extracted robustly. However, acquiring 3D object models from silhouettes has conventionally imposed a restriction on the object's shape and required the shape to be prepared in advance: many works acquired an object's reflectance properties using a pre-measured shape; many works acquired an object's motion using a rough shape of the object; and shape from silhouettes itself cannot measure any concave surface. In this paper we show approaches that remove these restrictions and this preparation. Section 2 presents an approach that acquires the reflectance properties and the shape at the same time, without any pre-measured shape. Section 3 describes an approach that acquires the object's motion without a prepared shape. Section 4 describes a method for measuring concave and smooth surfaces.

2 Reflection Properties Acquisition

Previously proposed methods acquired reflection properties using a known object shape. Our method acquires both the shape and the reflection properties. It acquires the shape from silhouettes with the volume intersection method, as a set of voxels, and also acquires the reflection properties of each voxel. A previously proposed method that acquired both the shape and the reflection properties required heavy computational cost; in our method the calculation is done independently at each voxel, which reduces that cost. The reconstructed reflection property represents both the diffuse reflection and the specular reflection at each surface voxel.
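The volume intersection step just described can be sketched as follows. This is a minimal voxel-carving sketch under illustrative assumptions (3x4 projection matrices, boolean silhouette masks), not the paper's implementation:

```python
import numpy as np

def volume_intersection(voxel_centers, cameras, silhouettes):
    """Carve a visual hull: keep only the voxels whose projection falls
    inside every camera's silhouette.

    voxel_centers : (N, 3) array of voxel center coordinates.
    cameras       : list of 3x4 projection matrices (one per camera).
    silhouettes   : list of 2D boolean arrays (True = inside silhouette).
    Returns a boolean mask over the N voxels (True = in the visual hull).
    """
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    inside = np.ones(len(voxel_centers), dtype=bool)
    for P, sil in zip(cameras, silhouettes):
        proj = homog @ P.T                       # (N, 3) homogeneous pixels
        u = (proj[:, 0] / proj[:, 2]).astype(int)
        v = (proj[:, 1] / proj[:, 2]).astype(int)
        h, w = sil.shape
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[ok] = sil[v[ok], u[ok]]
        inside &= hit            # a voxel survives only if every camera sees it
    return inside
```

Because each voxel is tested independently, this step parallelizes trivially, which is what makes the voxel-independent computation cheap.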
Based on the Torrance-Sparrow reflection model, we propose an improved reflection model suitable for this voxel-independent reconstruction. The reconstruction process consists of three steps: first the surface voxels are extracted, then the surface normal at each surface voxel is calculated, and finally its reflection property is estimated.

Figure 1: Volume intersection method

2.1 Surface Normal Estimation

The object shape is reconstructed as a visual hull, which consists of a set of voxels. A surface voxel is a voxel some of whose neighbor voxels are not included in the visual hull. Each surface voxel has a surface normal and reflection properties. When we project a surface voxel into the camera images, it is projected onto the border of at least one of the silhouettes. We call a pixel on the border an edge pixel. An edge pixel has its 2D surface

normal, and this normal can be extracted from the silhouette alone, hence voxel-independently. The surface normal of the surface voxel is parallel to the 2D surface normal and orthogonal to the view line through the edge pixel, so it too is extracted voxel-independently.

2.2 Reflection Properties Estimation

To estimate the reflection properties, we use a simplified Torrance-Sparrow reflection model. The model has two parameters: a diffuse reflection coefficient and a specular reflection coefficient. They are estimated from the images that observe the surface voxel; the surface normal and the camera positions determine which images observe it.

2.3 Experimental Results

A blue polyvinyl chloride ball, 22.5 cm in diameter, was used for the experiments. Each camera has … pixels and observes a … [cm] region. We set each voxel size to 0.5 cm; the total number of voxels is …. The shape and color properties were reconstructed from images taken by eight cameras. The reconstruction results are shown in Figure 2. The shape of the ball is reconstructed with the visual hull method (Figure 2 (ii)). Figure 2 (iii) and (iv) are views synthesized from two of the cameras' viewpoints. Figure 2 (v) is a synthesized view showing only the diffuse color, from the same camera viewpoint as Figure 2 (iii). Views synthesized from viewpoints where no camera is placed are shown in Figure 2 (vi), (vii), and (viii).

(i) input image (ii) reconstructed volume (iii) a camera view (iv) the other camera view (v) diffuse color (camera (ii)) (vi) virtual viewpoint 1 (vii) virtual viewpoint 2 (viii) virtual viewpoint 3
Figure 2: Reconstruction results

Comparison between Figure 2 (iii) and (v) shows that our method reconstructs not only the diffuse color but also the specular color.
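For reference, the per-voxel two-coefficient fit of Section 2.2 can be sketched as follows. This uses a common simplified Torrance-Sparrow-style form with a fixed, known roughness sigma so the fit stays linear; the paper's exact parameterization may differ:

```python
import numpy as np

def fit_reflection_coeffs(normal, lights, views, intensities, sigma=0.2):
    """Least-squares fit of the diffuse (kd) and specular (ks)
    coefficients for one surface voxel, using the simplified model
        I = kd * max(n.l, 0) + ks * exp(-alpha^2 / (2 sigma^2)) / (n.v)
    where alpha is the angle between the half-vector and the normal.
    lights/views are unit-ish direction vectors per observing image."""
    n = normal / np.linalg.norm(normal)
    rows = []
    for l, v in zip(lights, views):
        l = l / np.linalg.norm(l)
        v = v / np.linalg.norm(v)
        h = l + v
        h /= np.linalg.norm(h)
        alpha = np.arccos(np.clip(n @ h, -1.0, 1.0))
        diff = max(n @ l, 0.0)
        spec = np.exp(-alpha**2 / (2 * sigma**2)) / max(n @ v, 1e-6)
        rows.append([diff, spec])                 # intensity is linear in (kd, ks)
    A = np.array(rows)
    kd, ks = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)[0]
    return kd, ks
```

Since the fit needs only the voxel's own normal and the observations of that voxel, it can be run at every surface voxel independently, matching the voxel-independent design above.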
As can be seen from the highlight in the input image in Figure 2 (i), the position of the highlight changes when the viewpoint changes; this also confirms that our method reconstructs the specular color.

3 Motion Acquisition

This section describes a method to acquire the motion of an articulated object. Several methods for estimating articulated motion have been proposed, but they require a shape model of each body part. The

requirement is not suitable for 3D object model acquisition, because the applications described in Section 1 need 3D models of various objects, and preparing a shape model for each object is costly. Our method measures both the shape of the body parts and the articulated motion at the same time. The whole shape of the articulated object is acquired with the volume intersection method, as a visual hull consisting of a set of voxels. Making correspondences between regions acquired at different times yields the motion and the shapes of the body parts. Unnecessary voxels in the visual hull make this correspondence difficult, so we propose a multi-dimensional voxel feature that is not affected by the unnecessary voxels and establish the correspondence with it.

Successive visual hulls are used to acquire the shape and the motion. All the voxels in one body part are always under the same rigid motion, and we extract such voxels from the whole shape. It is difficult, however, to know in which areas no unnecessary voxels exist. Our solution is a multi-dimensional distance, which contains distances along several directions. The distances along some directions are affected by the unnecessary voxels; the distances along other directions are not. Using part of the multi-dimensional distance, instead of the whole, overcomes the effect of the unnecessary voxels.

3.1 Experimental Results

The cow model shown in Figure 3 (b) was used for the experiment. The model consists of five parts: a body and four legs. Ten frames of walking-motion data observed by 20 cameras were used as the input image sequences. The walking motion consists of four different rotations: each leg rotates backwards and forwards.
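The multi-dimensional distance feature described above can be sketched as follows. The six axis-aligned directions here are an illustrative choice; the paper's direction set may differ:

```python
import numpy as np

# An illustrative direction set: the six axis directions.
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def multi_dim_distance(hull, voxel):
    """Distance from a voxel to the visual-hull surface along several
    directions.  hull is a 3D boolean array (True = inside the hull),
    voxel an index triple inside it.  Returns one distance per direction.
    When matching voxels between frames, the entries corrupted by
    unnecessary voxels can be discarded and only the rest compared."""
    feature = []
    for d in DIRECTIONS:
        p = np.array(voxel)
        steps = 0
        # Walk along direction d until we leave the hull (or the grid).
        while True:
            q = p + d
            if np.any(q < 0) or np.any(q >= hull.shape) or not hull[tuple(q)]:
                break
            p = q
            steps += 1
        feature.append(steps)
    return feature
```

An unnecessary voxel blob inflates the distance only along the directions that pass through it, which is why comparing a subset of the feature's entries is robust to such blobs.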
Figure 3 (a) shows the sequential visual hulls reconstructed from the input; each visual hull consists of approximately … voxels. Figure 3 (c) shows the parts acquired with our method, and Figure 3 (d) shows the same result zoomed in on the joint between the body and the right-front leg. Figure 3 (c) shows that the shapes of the five parts of the cow model are acquired by our method.

4 Shape Acquisition

The volume intersection method has an advantage over other methods: it can acquire the shape of a texture-less object. It requires only the object's silhouettes and no point correspondences, which other methods require, and extracting the silhouettes of a texture-less object is easier than obtaining point correspondences. The method also has a disadvantage: it has difficulty acquiring smooth and concave surfaces. The visual hull acquired by the volume intersection method is a hull circumscribing the object, so acquiring a concave surface with the volume intersection method is impossible, and acquiring a smooth surface requires many cameras even when the surface is not concave.

We employ photometric stereo to acquire the smooth and concave surfaces. Photometric stereo estimates surface normals as a needle map, and the needle map contains the surface normals of concave surfaces. The needle map acquired by photometric stereo does not directly express the shape of the object, however; acquiring the shape requires reconstructing a distance map from the needle map. The distance map is obtained by maximizing the consistency between the needle map and the surface normals derived from the distance map. We call this consistency the needle map

consistency.

(a) view volumes V_t0 … V_t9 (b) 3D cow model at t0 (c) segmented result (d) segmented result (zoom) (e), (f) motion after the segmentation
Figure 3: Simulation results

Depth edges make it difficult to calculate this consistency. A depth edge is an area on the distance map where the depth from the camera to the surface varies discontinuously. The discontinuity prevents the calculation of surface normals, which means that depth edges prevent the calculation of the needle map consistency. We propose an approach that uses silhouettes taken from different viewpoints to reduce the bad effects of depth edges: an incorrect depth image is not consistent with silhouettes taken from other viewpoints. Based on this fact, our method minimizes two types of energy to reconstruct the depth image: one is based on the consistency between the depth image and the needle map, and the other on the consistency between the depth image and the silhouettes.

Figure 4: Depth edge (the depth varies discontinuously between the body region and the pole region)
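For reference, the needle map mentioned above can be obtained with classic Lambertian photometric stereo. This is the textbook least-squares formulation, not necessarily the exact variant used in the paper:

```python
import numpy as np

def photometric_stereo(light_dirs, images):
    """Recover a needle map (per-pixel unit normals) and albedo from
    images of a Lambertian surface under K known distant lights.

    light_dirs : (K, 3) array of light directions.
    images     : (K, H, W) array of intensities.
    Solves I = L (rho * n) per pixel by least squares."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                              # (K, H*W)
    G, *_ = np.linalg.lstsq(np.asarray(light_dirs, float), I, rcond=None)
    rho = np.linalg.norm(G, axis=0)                        # albedo per pixel
    normals = np.where(rho > 0, G / np.maximum(rho, 1e-12), 0.0)
    return normals.reshape(3, H, W), rho.reshape(H, W)
```

At least three non-coplanar lights are needed per pixel; the recovered normals form the needle map whose consistency with the distance map is optimized below.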

Our method uses multiple cameras. Let the number of cameras be C, let S_c be the silhouette on camera c (c = 1, …, C), let M_c be the number of pixels in S_c, let m_i^c (i = 1, …, M_c) be each pixel, and let n_i^c be the normal vector of the surface observed by m_i^c.

A pixel m_i^c included in a silhouette S_c occupies a square region on the image, which we write as [x_i^c, x_i^c + 1) × [y_i^c, y_i^c + 1), and we call the point (x_i^c, y_i^c) the representative point of m_i^c. The distance of the representative point of m_i^c is defined as the distance between the focal point of camera c and the point on the surface projected onto the representative point; we denote it Z(x_i^c, y_i^c). A distance map is a 2D matrix consisting of the distances of the representative points.

4.1 Needle Map Consistency

Let Z(x_i^c, y_i^c) be the depth of a pixel m_i^c, and let n_i^c = (p_i^c, q_i^c, 1)^T be the surface normal observed by m_i^c.

Figure 5: Needle map vs. surface normal

Suppose the surface observed by m_i^c is a plane with surface normal n_i^c. The depths of the three points (x_i^c + 1, y_i^c), (x_i^c, y_i^c + 1), and (x_i^c + 1, y_i^c + 1) are then given by n_i^c and Z(x_i^c, y_i^c). Writing these depths as Z_{1,0}, Z_{0,1}, and Z_{1,1}, we have

Z_{1,0}(x_i^c + 1, y_i^c) = Z(x_i^c, y_i^c) (1 + p_i^c / f_c)    (1)
Z_{0,1}(x_i^c, y_i^c + 1) = Z(x_i^c, y_i^c) (1 + q_i^c / f_c)    (2)
Z_{1,1}(x_i^c + 1, y_i^c + 1) = Z(x_i^c, y_i^c) (1 + p_i^c / f_c + q_i^c / f_c)    (3)

where f_c is the focal length of camera c. These relations give four ways of obtaining the depth at (x_i^c, y_i^c): Z(x_i^c, y_i^c) itself, a depth from (x_i^c − 1, y_i^c) with Equation (1), a depth from (x_i^c, y_i^c − 1) with Equation (2), and a depth from (x_i^c − 1, y_i^c − 1) with Equation (3). These four depths define the needle map consistency: when they are close together, the consistency is high. This gives the following energy, where lower energy means higher consistency:

E_N = Σ_{m_i^c ∈ S_c} [ (Z(x_i^c, y_i^c) − Z_{1,0}(x_i^c, y_i^c))^2 + (Z(x_i^c, y_i^c) − Z_{0,1}(x_i^c, y_i^c))^2 + (Z(x_i^c, y_i^c) − Z_{1,1}(x_i^c, y_i^c))^2 ]    (4)
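The energy of Equation (4) can be evaluated over a whole distance map as follows, assuming the depth and needle maps are stored as 2D arrays (a sketch of the energy term only, not of the full minimization):

```python
import numpy as np

def needle_map_energy(Z, p, q, f):
    """Needle-map consistency energy of Equation (4): compare each depth
    Z(x, y) with the depths predicted from its left, upper, and
    upper-left neighbours via Equations (1)-(3).

    Z, p, q : 2D arrays indexed [y, x] (depth and needle-map components).
    f       : focal length of the camera."""
    # Depths of (x, y) predicted from neighbouring pixels:
    z10 = Z[:, :-1] * (1 + p[:, :-1] / f)                        # from (x-1, y), Eq. (1)
    z01 = Z[:-1, :] * (1 + q[:-1, :] / f)                        # from (x, y-1), Eq. (2)
    z11 = Z[:-1, :-1] * (1 + (p[:-1, :-1] + q[:-1, :-1]) / f)    # from (x-1, y-1), Eq. (3)
    e = ((Z[:, 1:] - z10) ** 2).sum()
    e += ((Z[1:, :] - z01) ** 2).sum()
    e += ((Z[1:, 1:] - z11) ** 2).sum()
    return e
```

A depth map that satisfies Equations (1)-(3) everywhere gives zero energy; any deviation between a depth and its neighbour-predicted values increases it, which is what the minimization penalizes.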

4.2 Silhouette Consistency

Some pixels included in a silhouette are adjacent to pixels that are not included in the silhouette; we call such pixels edge pixels. Our method extracts the pixels for which at least one of the 8-neighbor pixels is not included in the silhouette. Consider the view line that starts from the focal point of a camera and passes through the representative point of an edge pixel. Projecting this view line onto another camera gives a 2D line on that camera's image, and the 2D line intersects the silhouette on that image; that is, some parts of the 2D line are included in the silhouette, or in other words, some parts of the view line are projected into the silhouette. A part of the view line that is projected into all the silhouettes is called a visual hull line. Ignoring sampling error, the visual hull lines make up the surface of the visual hull, and the silhouettes of the object coincide exactly with the silhouettes of the object's visual hull. The visual hull line gives the following constraints, which we call the visual hull line constraints:

The object never intersects any visual hull line.
Each visual hull line is tangent to the object at one or more points.

4.3 Experimental Results

An orange toy was used for the experiment. We put the toy at the center of our multi-camera system [1] and acquired input images with 8 cameras and 24 lights. Each camera has … pixels; 4 cameras are arranged on the front side of the toy and the other 4 on the back side. Twelve lights illuminate the toy from the front side and the other twelve from the back side. Figure 6 shows the whole shape reconstructed by integrating all cameras' depth maps. As Figure 6 (a) shows, the visual hull, which uses only the silhouette consistency, does not reconstruct a smooth surface.
Figure 6 (b) is the result of using the needle map consistency without the silhouette consistency; the lack of silhouette consistency produces an unnatural shape. On the contrary, using both the silhouette and needle map consistencies does not produce such an unnatural shape, as Figure 6 (c) shows. These results show the effectiveness of our method for objects with depth edges.

(a) visual hull (b) only needle map constraint (c) proposed method
Figure 6: Recovered shape

5 Conclusion

This paper proposed approaches for acquiring a 3D object model. Acquiring such a model normally runs into a problem: restrictions on, and preparation of, the object's shape. The proposed approaches solve this problem. The voxel-independent approach reduced the computational cost of acquiring the shape and the reflection properties, a cost that was the problem of other methods. The multi-dimensional voxel feature solved the problem of unnecessary voxels and acquired the body parts and the motion of articulated objects without any prepared shape model. The silhouette constraint solved the problem of depth edges and enabled us to acquire concave and smooth surfaces.

References

[1] M. Minoh, M. Iiyama, and Y. Kameda. 4pi measurement system: A complete volume reconstruction system for freely-moving objects. In IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI2003).


More information

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO Stefan Krauß, Juliane Hüttl SE, SoSe 2011, HU-Berlin PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO 1 Uses of Motion/Performance Capture movies games, virtual environments biomechanics, sports science,

More information

Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal

Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal Compact and Low Cost System for the Measurement of Accurate 3D Shape and Normal Ryusuke Homma, Takao Makino, Koichi Takase, Norimichi Tsumura, Toshiya Nakaguchi and Yoichi Miyake Chiba University, Japan

More information

Projector Calibration for Pattern Projection Systems

Projector Calibration for Pattern Projection Systems Projector Calibration for Pattern Projection Systems I. Din *1, H. Anwar 2, I. Syed 1, H. Zafar 3, L. Hasan 3 1 Department of Electronics Engineering, Incheon National University, Incheon, South Korea.

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

CS4670/5760: Computer Vision Kavita Bala Scott Wehrwein. Lecture 23: Photometric Stereo

CS4670/5760: Computer Vision Kavita Bala Scott Wehrwein. Lecture 23: Photometric Stereo CS4670/5760: Computer Vision Kavita Bala Scott Wehrwein Lecture 23: Photometric Stereo Announcements PA3 Artifact due tonight PA3 Demos Thursday Signups close at 4:30 today No lecture on Friday Last Time:

More information

Image-Based Rendering

Image-Based Rendering Image-Based Rendering COS 526, Fall 2016 Thomas Funkhouser Acknowledgments: Dan Aliaga, Marc Levoy, Szymon Rusinkiewicz What is Image-Based Rendering? Definition 1: the use of photographic imagery to overcome

More information

PART-LEVEL OBJECT RECOGNITION

PART-LEVEL OBJECT RECOGNITION PART-LEVEL OBJECT RECOGNITION Jaka Krivic and Franc Solina University of Ljubljana Faculty of Computer and Information Science Computer Vision Laboratory Tržaška 25, 1000 Ljubljana, Slovenia {jakak, franc}@lrv.fri.uni-lj.si

More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 12 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

Multiple View Geometry

Multiple View Geometry Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric

More information

Lambertian model of reflectance I: shape from shading and photometric stereo. Ronen Basri Weizmann Institute of Science

Lambertian model of reflectance I: shape from shading and photometric stereo. Ronen Basri Weizmann Institute of Science Lambertian model of reflectance I: shape from shading and photometric stereo Ronen Basri Weizmann Institute of Science Variations due to lighting (and pose) Relief Dumitru Verdianu Flying Pregnant Woman

More information

Geometric Reconstruction Dense reconstruction of scene geometry

Geometric Reconstruction Dense reconstruction of scene geometry Lecture 5. Dense Reconstruction and Tracking with Real-Time Applications Part 2: Geometric Reconstruction Dr Richard Newcombe and Dr Steven Lovegrove Slide content developed from: [Newcombe, Dense Visual

More information

Multi-View Matching & Mesh Generation. Qixing Huang Feb. 13 th 2017

Multi-View Matching & Mesh Generation. Qixing Huang Feb. 13 th 2017 Multi-View Matching & Mesh Generation Qixing Huang Feb. 13 th 2017 Geometry Reconstruction Pipeline RANSAC --- facts Sampling Feature point detection [Gelfand et al. 05, Huang et al. 06] Correspondences

More information

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,

3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: , 3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates

More information

SIMPLE ROOM SHAPE MODELING WITH SPARSE 3D POINT INFORMATION USING PHOTOGRAMMETRY AND APPLICATION SOFTWARE

SIMPLE ROOM SHAPE MODELING WITH SPARSE 3D POINT INFORMATION USING PHOTOGRAMMETRY AND APPLICATION SOFTWARE SIMPLE ROOM SHAPE MODELING WITH SPARSE 3D POINT INFORMATION USING PHOTOGRAMMETRY AND APPLICATION SOFTWARE S. Hirose R&D Center, TOPCON CORPORATION, 75-1, Hasunuma-cho, Itabashi-ku, Tokyo, Japan Commission

More information

Photometric stereo. Recovering the surface f(x,y) Three Source Photometric stereo: Step1. Reflectance Map of Lambertian Surface

Photometric stereo. Recovering the surface f(x,y) Three Source Photometric stereo: Step1. Reflectance Map of Lambertian Surface Photometric stereo Illumination Cones and Uncalibrated Photometric Stereo Single viewpoint, multiple images under different lighting. 1. Arbitrary known BRDF, known lighting 2. Lambertian BRDF, known lighting

More information

A Sketch Interpreter System with Shading and Cross Section Lines

A Sketch Interpreter System with Shading and Cross Section Lines Journal for Geometry and Graphics Volume 9 (2005), No. 2, 177 189. A Sketch Interpreter System with Shading and Cross Section Lines Kunio Kondo 1, Haruki Shizuka 1, Weizhong Liu 1, Koichi Matsuda 2 1 Dept.

More information

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING Y. Kuzu a, O. Sinram b a Yıldız Technical University, Department of Geodesy and Photogrammetry Engineering 34349 Beşiktaş Istanbul, Turkey - kuzu@yildiz.edu.tr

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi, Francois de Sorbier and Hideo Saito Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku,

More information

Factorization Method Using Interpolated Feature Tracking via Projective Geometry

Factorization Method Using Interpolated Feature Tracking via Projective Geometry Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,

More information

Expanding gait identification methods from straight to curved trajectories

Expanding gait identification methods from straight to curved trajectories Expanding gait identification methods from straight to curved trajectories Yumi Iwashita, Ryo Kurazume Kyushu University 744 Motooka Nishi-ku Fukuoka, Japan yumi@ieee.org Abstract Conventional methods

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

Topic 12: Texture Mapping. Motivation Sources of texture Texture coordinates Bump mapping, mip-mapping & env mapping

Topic 12: Texture Mapping. Motivation Sources of texture Texture coordinates Bump mapping, mip-mapping & env mapping Topic 12: Texture Mapping Motivation Sources of texture Texture coordinates Bump mapping, mip-mapping & env mapping Texture sources: Photographs Texture sources: Procedural Texture sources: Solid textures

More information

Multi-View 3D-Reconstruction

Multi-View 3D-Reconstruction Multi-View 3D-Reconstruction Cedric Cagniart Computer Aided Medical Procedures (CAMP) Technische Universität München, Germany 1 Problem Statement Given several calibrated views of an object... can we automatically

More information

Structure from motion

Structure from motion Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t 2 R 3,t 3 Camera 1 Camera

More information

Prof. Trevor Darrell Lecture 18: Multiview and Photometric Stereo

Prof. Trevor Darrell Lecture 18: Multiview and Photometric Stereo C280, Computer Vision Prof. Trevor Darrell trevor@eecs.berkeley.edu Lecture 18: Multiview and Photometric Stereo Today Multiview stereo revisited Shape from large image collections Voxel Coloring Digital

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Topic 11: Texture Mapping 11/13/2017. Texture sources: Solid textures. Texture sources: Synthesized

Topic 11: Texture Mapping 11/13/2017. Texture sources: Solid textures. Texture sources: Synthesized Topic 11: Texture Mapping Motivation Sources of texture Texture coordinates Bump mapping, mip mapping & env mapping Texture sources: Photographs Texture sources: Procedural Texture sources: Solid textures

More information

3D Shape Recovery of Smooth Surfaces: Dropping the Fixed Viewpoint Assumption

3D Shape Recovery of Smooth Surfaces: Dropping the Fixed Viewpoint Assumption IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO., 1 3D Shape Recovery of Smooth Surfaces: Dropping the Fixed Viewpoint Assumption Yael Moses Member, IEEE and Ilan Shimshoni Member,

More information

Introduction to Computer Vision

Introduction to Computer Vision Introduction to Computer Vision Michael J. Black Nov 2009 Perspective projection and affine motion Goals Today Perspective projection 3D motion Wed Projects Friday Regularization and robust statistics

More information

Image Based Reconstruction II

Image Based Reconstruction II Image Based Reconstruction II Qixing Huang Feb. 2 th 2017 Slide Credit: Yasutaka Furukawa Image-Based Geometry Reconstruction Pipeline Last Lecture: Multi-View SFM Multi-View SFM This Lecture: Multi-View

More information

Measurement of Pedestrian Groups Using Subtraction Stereo

Measurement of Pedestrian Groups Using Subtraction Stereo Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Gesture Recognition using Temporal Templates with disparity information

Gesture Recognition using Temporal Templates with disparity information 8- MVA7 IAPR Conference on Machine Vision Applications, May 6-8, 7, Tokyo, JAPAN Gesture Recognition using Temporal Templates with disparity information Kazunori Onoguchi and Masaaki Sato Hirosaki University

More information

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison CHAPTER 9 Classification Scheme Using Modified Photometric Stereo and 2D Spectra Comparison 9.1. Introduction In Chapter 8, even we combine more feature spaces and more feature generators, we note that

More information

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

Topic 11: Texture Mapping 10/21/2015. Photographs. Solid textures. Procedural

Topic 11: Texture Mapping 10/21/2015. Photographs. Solid textures. Procedural Topic 11: Texture Mapping Motivation Sources of texture Texture coordinates Bump mapping, mip mapping & env mapping Topic 11: Photographs Texture Mapping Motivation Sources of texture Texture coordinates

More information

CSE528 Computer Graphics: Theory, Algorithms, and Applications

CSE528 Computer Graphics: Theory, Algorithms, and Applications CSE528 Computer Graphics: Theory, Algorithms, and Applications Hong Qin State University of New York at Stony Brook (Stony Brook University) Stony Brook, New York 11794--4400 Tel: (631)632-8450; Fax: (631)632-8334

More information

MR-Mirror: A Complex of Real and Virtual Mirrors

MR-Mirror: A Complex of Real and Virtual Mirrors MR-Mirror: A Complex of Real and Virtual Mirrors Hideaki Sato 1, Itaru Kitahara 1, and Yuichi Ohta 1 1 Department of Intelligent Interaction Technologies, Graduate School of Systems and Information Engineering,

More information

International Conference on Communication, Media, Technology and Design. ICCMTD May 2012 Istanbul - Turkey

International Conference on Communication, Media, Technology and Design. ICCMTD May 2012 Istanbul - Turkey VISUALIZING TIME COHERENT THREE-DIMENSIONAL CONTENT USING ONE OR MORE MICROSOFT KINECT CAMERAS Naveed Ahmed University of Sharjah Sharjah, United Arab Emirates Abstract Visualizing or digitization of the

More information

Motion and Tracking. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)

Motion and Tracking. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE) Motion and Tracking Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Motion Segmentation Segment the video into multiple coherently moving objects Motion and Perceptual Organization

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

Computer Graphics. Shading. Based on slides by Dianna Xu, Bryn Mawr College

Computer Graphics. Shading. Based on slides by Dianna Xu, Bryn Mawr College Computer Graphics Shading Based on slides by Dianna Xu, Bryn Mawr College Image Synthesis and Shading Perception of 3D Objects Displays almost always 2 dimensional. Depth cues needed to restore the third

More information

Synthesis of Textures with Intricate Geometries using BTF and Large Number of Textured Micropolygons. Abstract. 2. Related studies. 1.

Synthesis of Textures with Intricate Geometries using BTF and Large Number of Textured Micropolygons. Abstract. 2. Related studies. 1. Synthesis of Textures with Intricate Geometries using BTF and Large Number of Textured Micropolygons sub047 Abstract BTF has been studied extensively and much progress has been done for measurements, compression

More information

Color and Range Sensing for Hypermedia and Interactivity in Museums

Color and Range Sensing for Hypermedia and Interactivity in Museums Color and Range Sensing for Hypermedia and Interactivity in Museums R Baribeau and J.M. Taylor Analytical Research Services Canadian Conservation Institute Department of Communications Ottawa, Canada KIA

More information

CS770/870 Spring 2017 Animation Basics

CS770/870 Spring 2017 Animation Basics Preview CS770/870 Spring 2017 Animation Basics Related material Angel 6e: 1.1.3, 8.6 Thalman, N and D. Thalman, Computer Animation, Encyclopedia of Computer Science, CRC Press. Lasseter, J. Principles

More information

CS770/870 Spring 2017 Animation Basics

CS770/870 Spring 2017 Animation Basics CS770/870 Spring 2017 Animation Basics Related material Angel 6e: 1.1.3, 8.6 Thalman, N and D. Thalman, Computer Animation, Encyclopedia of Computer Science, CRC Press. Lasseter, J. Principles of traditional

More information

3D Modeling using multiple images Exam January 2008

3D Modeling using multiple images Exam January 2008 3D Modeling using multiple images Exam January 2008 All documents are allowed. Answers should be justified. The different sections below are independant. 1 3D Reconstruction A Robust Approche Consider

More information

The Law of Reflection

The Law of Reflection If the surface off which the light is reflected is smooth, then the light undergoes specular reflection (parallel rays will all be reflected in the same directions). If, on the other hand, the surface

More information

SCAPE: Shape Completion and Animation of People

SCAPE: Shape Completion and Animation of People SCAPE: Shape Completion and Animation of People By Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, James Davis From SIGGRAPH 2005 Presentation for CS468 by Emilio Antúnez

More information

Ligh%ng and Reflectance

Ligh%ng and Reflectance Ligh%ng and Reflectance 2 3 4 Ligh%ng Ligh%ng can have a big effect on how an object looks. Modeling the effect of ligh%ng can be used for: Recogni%on par%cularly face recogni%on Shape reconstruc%on Mo%on

More information

Visual Appearance: Reflectance Transformation Imaging (RTI) 22 Marzo 2018

Visual Appearance: Reflectance Transformation Imaging (RTI) 22 Marzo 2018 Visual Appearance: Reflectance Transformation Imaging (RTI) 22 Marzo 2018 LIGHT MATERIAL Visual Appearance Color due to the interaction between the lighting environment (intensity, position, ) and the

More information

Department of Computer Engineering, Middle East Technical University, Ankara, Turkey, TR-06531

Department of Computer Engineering, Middle East Technical University, Ankara, Turkey, TR-06531 INEXPENSIVE AND ROBUST 3D MODEL ACQUISITION SYSTEM FOR THREE-DIMENSIONAL MODELING OF SMALL ARTIFACTS Ulaş Yılmaz, Oğuz Özün, Burçak Otlu, Adem Mulayim, Volkan Atalay {ulas, oguz, burcak, adem, volkan}@ceng.metu.edu.tr

More information

Modeling and Packing Objects and Containers through Voxel Carving

Modeling and Packing Objects and Containers through Voxel Carving Modeling and Packing Objects and Containers through Voxel Carving Alex Adamson aadamson@stanford.edu Nikhil Lele nlele@stanford.edu Maxwell Siegelman maxsieg@stanford.edu Abstract This paper describes

More information

Final Review CMSC 733 Fall 2014

Final Review CMSC 733 Fall 2014 Final Review CMSC 733 Fall 2014 We have covered a lot of material in this course. One way to organize this material is around a set of key equations and algorithms. You should be familiar with all of these,

More information

Modeling, Combining, and Rendering Dynamic Real-World Events From Image Sequences

Modeling, Combining, and Rendering Dynamic Real-World Events From Image Sequences Modeling, Combining, and Rendering Dynamic Real-World Events From Image s Sundar Vedula, Peter Rander, Hideo Saito, and Takeo Kanade The Robotics Institute Carnegie Mellon University Abstract Virtualized

More information

Real-Time Video- Based Modeling and Rendering of 3D Scenes

Real-Time Video- Based Modeling and Rendering of 3D Scenes Image-Based Modeling, Rendering, and Lighting Real-Time Video- Based Modeling and Rendering of 3D Scenes Takeshi Naemura Stanford University Junji Tago and Hiroshi Harashima University of Tokyo In research

More information

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller

3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller 3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Video-Rate Hair Tracking System Using Kinect

Video-Rate Hair Tracking System Using Kinect Video-Rate Hair Tracking System Using Kinect Kazumasa Suzuki, Haiyuan Wu, and Qian Chen Faculty of Systems Engineering, Wakayama University, Japan suzuki@vrl.sys.wakayama-u.ac.jp, {wuhy,chen}@sys.wakayama-u.ac.jp

More information