Reconstruction of 3D Models from Specular Motion Using Circular Lights

Jiang Yu Zheng, Akio Murata, Yoshihiro Fukagawa, and Norihiro Abe
Faculty of Computer Science and Systems Engineering
Kyushu Institute of Technology, Iizuka, Fukuoka 820, Japan

Abstract

This work generalizes a shape recovery method for specular surfaces. We previously used linear lights, which generate planes of ray, to highlight a rotating object so that its specular surfaces can be computed from continuous images. By setting the lights properly, the shape at each rotation plane can be measured and a 3D graphics model can subsequently be reconstructed. In this paper, we extend our lights to circular ones, which produce cones of ray. Under such illumination, all surface points with different normals are highlighted within one rotation period. The computation of shape is even simpler than in the linear-light case. We give results of shape recovery on real objects.

1. Introduction

Current 3D shape reconstruction techniques in vision cannot deal successfully with specular surfaces. Even laser range finders fail to obtain the correct shape of a surface with strong specular reflectance [13]. In the real world, however, many objects, from industrial products to daily goods, exhibit specular reflection. The objective of this work is to recover 3D shapes of specular surfaces by rotating them, so as to generate graphics models for display. The constructed models will be useful in CAD, multimedia databases, information networks, and virtual reality. It is well known that, on a specular surface, the motion of surface texture differs from the motion of a reflected environment pattern as the viewpoint shifts. This is a cue humans use in perceiving specularity.
Estimation of 3D points on a specular surface has been studied from the surface-normal aspect in static images [1,2], from the disparity aspect with two viewpoints [3], and from the motion aspect with viewpoint movement [4,12] or object movement [9,11], generating a surface curve or surfaces under a point or extended illumination. This work is an extension of the model recovery methods in [9,11]. We use circular lights to illuminate rotating objects. Every object point, regardless of its surface normal, is highlighted once during the rotation. This allows us to recover more specular surfaces. Moreover, the generalization simplifies the geometric relation between normal, light, and camera, so that efficient computation can be achieved. In the following sections, we summarize the previous related works and derive a light shape that satisfies the needs of our extension. Then we apply our algorithms to different objects. Finally, we summarize the relation between illuminations, objects, and algorithms.

2. Previous Works Using Plane of Ray

Under orthogonal projection, the newest results on 3D estimation of specular objects are 3D surface and model recovery by rotating objects [9,11], and the computation of a 3D surface curve by controlled viewpoint movement. For a rotating object, the camera axis is set orthogonal to the rotation axis and continuous images are taken (Fig. 1). In order to illuminate the specular surface, we have put extended lights (linear, curved, or a combination of linear lights around the object) in a plane that contains the rotation axis. These lights are far away from the object compared with the object size, so that rays from a point of light to different surface points have the same direction. The plane is named the Plane of Ray, since the rays arriving at the object are almost in the plane.

Fig. 1 Taking images of objects with specular reflection.

Two kinds of highlight information help us determine local and global surface shape.
The first is the observed shape of a highlight stripe, and the second is its motion over the surface due to the object rotation. This information can be parameterized by the normal direction of the highlight stripe in the image and the trajectory of the highlight in the image sequence. For each rotation plane at a different height, we can determine, in our camera configuration, an image line that is the projection of the plane. An epipolar-plane image (EPI) is collected from that line, and the motion of highlights is tracked in the EPI. One such example is given in Fig. 2. As the object rotates, a highlight moves gradually over the surface. We have computed the 3D positions of the passed points on each rotation plane, using the moving trajectory of the highlight in the EPI and the corresponding highlight directions (or gradient directions at the highlight stripe) in the images [14]. With one or two planes of ray set at various directions from the camera axis, we give a summary of our algorithms (Table 1) for different types of shapes. If a single plane of ray
is set, the shape recovery is a first-order differential equation. Multiple planes of ray, located at orientations φ1 and φ2 from the camera axis, make the problem as simple as linear equations. If the object is a cylinder, or the plane of ray aligns with the camera direction, extraction of the highlight normal in the image is unnecessary. All surface points can be highlighted only when the plane of ray is aligned with the camera direction. During the rotation, the component n_y of a surface normal along the rotation axis is constant, while the component (n_x, n_z) in the rotation plane faces all orientations periodically. The extended illuminant we select will make every point highlighted when its normal component (n_x, n_z) rotates to a designated angle, regardless of its n_y component. We can thus reduce the 3D reconstruction problem to a 2D recovery problem in each rotation plane.

Fig. 2 Epipolar-plane image parallel to a rotation plane, on which a trace of highlight can be observed.

Table 1. Shape recovery under different conditions, such as object shape and the number and locations of planes of ray (n_y is the component of the surface normal along the rotation axis): a single plane of ray gives a first-order differential equation; multiple planes of ray (linear lights) give linear equations; for a cylinder (n_y = 0), fixed points suffice.

3. Generalization of the Light Shape

What is the ideal illumination? A point light can only recover a surface curve; generating a complete model needs as many highlighted points as possible in one rotation. We will figure out the shape of the illuminant using the Gaussian sphere.

Fig. 3 Gaussian sphere showing the relations between light, surface normal, and viewing direction.

As depicted in Fig. 3, a surface normal N is displayed on the sphere in the system C-xyz, without its real position on the object. The camera direction is V(0,0,1). We want a point to be highlighted when its component (n_x, n_z) rotates toward a given angle.
We will use circular lights, which not only satisfy this requirement but also remove the computation of the highlight shape in the image. The shape recovery is thus done on each EPI separately, which is more stable and efficient. We locate the object coordinate system O-XYZ so that its Y axis is on the rotation axis. The rotation angle θ is known and is clockwise in the rotation plane. Through a simple computation, we can detect the rotation axis in the image frame, and the image can be transformed so that its y axis is on the projected rotation axis. As an object rotates, a spatial-temporal volume can be piled up from its continuous images. A surface point P(X, Y, Z) in the system O-XYZ is mapped into the volume continuously, forming a trajectory p(x(θ), y, θ) in the volume, where y in the camera coordinate system C-xyz is constantly equal to Y. The surface normal at the point is denoted N(n_x, n_y, n_z) in the system O-XYZ, and a ray from a point of the light is denoted L(l_x, l_y, l_z) in the system C-xyz. The component (n_x, n_z) should rotate to the designated angle, say φ/2 ∈ (−π/2, π/2), in the rotation plane. The surface normal N then lies in the vertical plane through O and N_0, where ∠VON_0 = φ/2. According to the specular reflection criterion, the ray L highlighting the point lies in the plane VON and has its angle of incidence ∠LON equal to its angle of reflection ∠NOV. As N sweeps from −π to +π within this vertical plane, the plane VON rotates around V. We can deduce that the ray L draws a circle on the Gaussian sphere, symmetric to the horizontal plane and passing through the vector V and L_0, where L_0 is at angle φ from the camera axis. The proof is given in the following. From the constraint of specular reflection,

∠LON = ∠NOV,   (1)

we have the product of vectors

L × N = N × (−V),   (2)

where N is a unit vector in the vertical plane with changeable n_y; it can be written as ((1 − n_y^2)^{1/2} sin(φ/2), n_y, −(1 − n_y^2)^{1/2} cos(φ/2)). From Eq. 2, we obtain three sub-equations.
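Before eliminating n_y analytically, the circle claim can be checked numerically. The sketch below (numpy; the angle value is an arbitrary example) follows the text's conventions: V = (0, 0, 1) is the viewing direction, so the ray from the surface toward the camera is W = −V, and the incident ray is obtained by mirroring W about the normal N.

```python
import numpy as np

# Numerical check: for every surface normal whose (n_x, n_z) component
# points at the fixed angle phi/2, the light ray L that is specularly
# reflected into the camera lies in one plane parallel to the y
# (rotation) axis -- i.e. on a circle of the Gaussian sphere.
phi = np.deg2rad(50.0)              # example designated angle
W = np.array([0.0, 0.0, -1.0])      # surface-to-camera ray, W = -V

for n_y in np.linspace(-0.99, 0.99, 21):
    c = np.sqrt(1.0 - n_y**2)
    # unit normal with (n_x, n_z) at angle phi/2, arbitrary n_y
    N = np.array([c * np.sin(phi / 2), n_y, -c * np.cos(phi / 2)])
    # specular reflection: L mirrors the camera ray W about N
    L = 2.0 * np.dot(N, W) * N - W
    # plane of the circle (cf. Eq. 3): l_x * ctg(phi/2) + l_z = 1
    assert abs(L[0] / np.tan(phi / 2) + L[2] - 1.0) < 1e-12
```

The assertion holds for every n_y, so all the highlighting rays lie on the intersection of one plane with the unit sphere, as the analytic elimination below also shows.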
Eliminating n_y from the sub-equations, we can deduce that the direction of the light ray L(l_x, l_y, l_z) satisfies

l_z = 1 − l_x ctg(φ/2),   (3)

which means the vector L lies in a plane parallel to the rotation axis (the y axis). Because L is also a unit vector on the sphere, it must lie on the intersection of the sphere and the plane, which is a circle. The rays from the light compose a cone, named the Cone of Ray. Fortunately, we can set a circular light to realize this illumination. The circular light should be set symmetrically to the xz plane in the system C-xyz. The two points on both
the circular light and the xz plane should be located (1) on the camera axis and at orientation φ from the camera axis, respectively, and (2) at the same distance from the rotation axis. A light that is large compared with the object size is preferred, because its rays toward the object are then close to the ideal rays centralized at the sphere center. Given a light with a known diameter γ and a selected orientation φ/2 for catching highlights on objects, the light can be set uniquely, with the two points at distance (γ/2)csc(φ/2) from the rotation axis and at orientations 0 and φ from the camera axis, respectively.

4. Shape Estimation under Two Cones of Ray

The circular light guarantees that a surface point is highlighted when its normal rotates to a given orientation. From the camera geometry (Fig. 4), the viewing direction under orthogonal projection is V(−sinθ, 0, cosθ) in the system O-XYZ. The image position of a point viewed at rotation angle θ can be written as

x(θ) = P · x = X(θ) cosθ + Z(θ) sinθ,   (4)

where x is the unit vector of the horizontal image axis. Obviously, the position that reflects the light to the camera shifts over the surface during the rotation (surface points pass the direction φ/2 in turn).

Fig. 4 Geometry of camera and object in the rotation plane.

If we set two circular lights passing through V at directions φ1 and φ2 from the camera axis, as Fig. 5 shows, an arbitrary point reflects them to the camera when its normal rotates in turn into two vertical planes at angles φ1/2 and φ2/2 from the camera axis, respectively. The point has a delay Δφ = φ1/2 − φ2/2 between being highlighted by the two lights. The principle is similar to motion stereo, except that the points put into correspondence are highlights rather than edges from texture or corners; the computation is very simple, and the result depends only on the light setting, the rotation angle, and the image positions of the highlights.
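The light-placement rule of Sec. 3, with its two xz-plane crossings at orientations 0 and φ and at distance (γ/2)csc(φ/2) from the rotation axis, can be sanity-checked in a few lines; the numeric values below are arbitrary examples.

```python
import math

# Check of the circular-light placement rule: the two crossings of the
# light with the xz plane sit at orientations 0 and phi from the camera
# axis, both at distance (gamma/2)*csc(phi/2) from the rotation axis.
# The chord between them must then equal the light diameter gamma.
gamma = 0.32                         # light diameter (e.g. a 32 cm ring)
phi = math.radians(50.0)             # example orientation of the light
d = (gamma / 2) / math.sin(phi / 2)  # (gamma/2)*csc(phi/2)

p0 = (d * math.sin(0.0), d * math.cos(0.0))  # point on the camera axis
p1 = (d * math.sin(phi), d * math.cos(phi))  # point at orientation phi
chord = math.dist(p0, p1)
assert abs(chord - gamma) < 1e-12    # chord length equals the diameter
```

The chord 2d·sin(φ/2) collapses to γ exactly, confirming that the prescribed distance places a light of diameter γ through both required directions.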
Two lines of sight through the projected highlights cross at the surface point geometrically, which determines the point position as shown in Fig. 6. The equations of the lines of sight can be written as

x1(θ) = X(θ) cosθ + Z(θ) sinθ,   (5)
x2(θ + Δφ) = X(θ) cos(θ + Δφ) + Z(θ) sin(θ + Δφ),   (6)

which yield the position of the surface point as

X(θ) = [x1 sin(θ + Δφ) − x2 sinθ] / sinΔφ,
Z(θ) = [x2 cosθ − x1 cos(θ + Δφ)] / sinΔφ,   (7)

where X(θ) and Z(θ) are the coordinates of the surface point in the rotation plane.

Fig. 5 Two circular lights set to produce two cones of ray. (a) Light and system setting. (b) Top view of the light setting.

In the EPI, a surface point traces a sinusoidal function of the rotation angle according to Eq. 4, even where it cannot be located visually in the image. Determining the 3D position of a point requires matching two projections on this trajectory, and the two highlighted positions provide the information. The sinusoidal trajectory crosses the two highlight traces when the xz component of the point's normal rotates to φ1/2 and φ2/2. We thus track one highlight trace and find the correspondence of each point on the second highlight trace, which has the delay Δφ (Fig. 7). The detection and tracking of highlights has been described in [9,11,14]. Any zero-curvature point (appearing on planes) in the rotation plane yields two horizontal segments in the EPI, having the same delay Δφ in reflecting the two lights. By measuring the difference of θ between the two segments, we calibrate the difference in the orientations of the two lights described above.
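Equations (5)-(7) amount to intersecting two lines of sight, i.e. solving a 2x2 linear system. A minimal sketch (numpy; the function and variable names are ours, and the test point is synthetic):

```python
import numpy as np

def triangulate(x1, x2, theta, dphi):
    """Solve Eqs. (5)-(6) for (X, Z) on a rotation plane:
    x1 = X cos(theta)        + Z sin(theta)
    x2 = X cos(theta + dphi) + Z sin(theta + dphi)."""
    A = np.array([[np.cos(theta),        np.sin(theta)],
                  [np.cos(theta + dphi), np.sin(theta + dphi)]])
    return np.linalg.solve(A, np.array([x1, x2]))

# Round trip on a synthetic surface point.
X, Z = 0.07, -0.03                  # ground-truth coordinates [m]
theta = np.deg2rad(40.0)            # rotation angle at 1st highlight
dphi = np.deg2rad(25.0)             # delay between the two lights
x1 = X * np.cos(theta) + Z * np.sin(theta)
x2 = X * np.cos(theta + dphi) + Z * np.sin(theta + dphi)
Xr, Zr = triangulate(x1, x2, theta, dphi)
assert np.allclose([Xr, Zr], [X, Z])   # position recovered
```

The determinant of the system is sin Δφ, matching the 1/sinΔφ factor in Eq. (7); the recovery degenerates only when the two lights coincide (Δφ = 0).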
5. Shape Estimation under a Cone of Ray

With one circular light, surface points with any kind of normal can be computed by solving a first-order differential equation, as was done for the cylinder (Table 1). The equations, similar to those in [14], form a pair of first-order differential equations (8) in the highlight trajectory, defined respectively in the domains away from θ = π/2, 3π/2 and θ = 0, π. We can compute their solutions and the corresponding X and Z using Eq. 4. Objects can have general shapes, and the light can be set freely rather than aligned with the camera. The light around the object at the camera orientation, given in Table 1, is a special case here in which the circular light is set at the direction φ = 0; the cone of ray then degenerates to a plane of ray. Table 2 summarizes the algorithms used under cone-of-ray illumination. The computation can be carried out on each EPI separately, so it achieves good performance.

The circular fluorescent lights are commercial products with a diameter of 32 cm. The image projection of the rotation axis is determined by imaging a pole standing at the center of the turntable. The accurate light direction difference Δφ is then calibrated using the EPI of a vertical planar mirror at the center of the turntable. The camera is about 2.5 m away from the object, and a lens with a long focal length is used. In principle, every point can be highlighted once under a circular light during a rotation period. However, if an object has a very deep concave shape, the rays may be occluded so that no highlight is observable on such a surface; the occlusion that arises in stereo also happens here. Figure 8 gives an example of a specular sphere under two cones of ray and its recovered shape displayed in graphics. We can see that the result is very accurate.

Fig. 8 Test of our algorithm on a sphere. (a) Original image. (b) Recovered model.

Fig. 6 Two lines of sight cross at the surface point.
Fig. 7 Tracking highlight traces in an EPI and matching them.

Table 2. Shape recovery methods under different lights for different types of objects.

6. Experimental Results

We have done experiments on real objects. The system setting for modeling objects is displayed in Fig. 5, with two circular fluorescent lights.

Fig. 9 A bottle made of metal and its recovered 3D model. (a) Original image. (b) Recovered whole model. (c) Top part of the object in wire frame. (d) Top part of the model in shading.

Figures 9 and 10 show two objects and their reconstructed models. One is a bottle made of metal (Fig. 9(a)) that has surface normals in all directions; one of its EPIs is shown in Fig. 2. The reconstructed model is shown in Fig. 9(b). The overall shape is recovered well, except at places with high n_y. This is because the bottle is relatively large compared with the circular lights, and the rays arriving at such places do not form an ideal cone of ray. We take the top part of the bottle and put it
near the rotation center. The computed model is displayed in shading mode in Fig. 9(c,d) and in wire-frame mode in Fig. 9(e); the corresponding shapes in Fig. 9(b) are improved. Figure 10 shows a plastic toy and its recovered models. Although its specular reflection component is not strong, we can still recover the major part of the object, displayed in Fig. 10(c)(d). The acquired data contain almost 512 x 360 (image size x rotation angles) points. The unevenness of the model is due to our limited resolution in model display: only 1/3 of the measured data are used in the display, and the surface would be very smooth if all the data were used. The missing parts of the model are due to unclear highlights on weakly specular surfaces, occlusion of rays by other parts, and failures in highlight extraction. The stronger the specular reflection, the better the result. For objects with weak specular reflectance, highlight tracking is more difficult than for shiny metal objects. Also, for a deep valley on a surface, the highlight may be occluded by other parts. Moreover, uneven surfaces may cause difficulty in tracking waving highlight traces in the EPI because of resolution limits.

7. Conclusion

In this paper, we generalized a shape recovery method for objects with specular reflectance. By using circular lights to illuminate rotating objects, every point on an object can be highlighted at a desired orientation regardless of its surface normal. General types of objects can be measured and modeled with a simple computation, and the light position is free compared with the previous method using linear lights. We have done experiments on real objects and obtained good results.

References

[1] K. Ikeuchi, Determining surface orientations of specular surfaces by using the photometric stereo method, IEEE PAMI, Vol. 3, No. 6, 1981.
[2] G. Healey and T. Binford, Local shape from specularity, ICCV.
[3] A.
Blake and G. Brelstaff, Geometry from specularities, 2nd ICCV.
[4] A. Zisserman, P. Giblin and A. Blake, The information available to a moving observer from specularities, Image and Vision Computing, Vol. 7, 1989.
[5] H. Baker and R. Bolles, Generalizing epipolar-plane image analysis on the spatiotemporal surface, CVPR-88, pp. 2-9, 1988.
[6] B. K. P. Horn, Shape from Shading, The MIT Press.
[7] J. Y. Zheng and F. Kishino, Verifying and combining different visual cues into a complete 3D model, CVPR-92.
[8] J. Y. Zheng, Acquiring 3D models from sequences of contours, IEEE PAMI, Vol. 16, No. 2, pp. 163-178, Feb. 1994.
[9] J. Y. Zheng, Y. Fukagawa, T. Ohtsuka and N. Abe, Acquiring 3D models from rotation and a highlight, 12th ICPR, Vol. 1, 1994.
[10] J. Y. Zheng, H. Kakinoki, K. Tanaka and N. Abe, Acquiring 3D models from fixed points during rotation, ICARCV '94, Vol. 1, 1994.
[11] J. Y. Zheng, Y. Fukagawa and N. Abe, Shape and model from specular motion, 5th ICCV, pp. 92-97, 1995.
[12] M. Oren and S. K. Nayar, A theory of specular surface geometry, 5th ICCV, 1995.
[13] J. Clark, E. Trucco and H. Cheung, Improving laser triangulation sensors using polarization, 5th ICCV, 1995.
[14] J. Y. Zheng, Y. Fukagawa and N. Abe, 3D surface estimation and model construction from specular motion, submitted to IEEE Trans. PAMI.

Fig. 10 A plastic toy and its recovered model under two cones of ray illumination. (c) Extracted highlight traces. (d) Matching points on highlight traces; matched segments are connected. (e) Estimated shape at the corresponding rotation plane. (f) Recovered 3D surface model.
More informationMathematics of a Multiple Omni-Directional System
Mathematics of a Multiple Omni-Directional System A. Torii A. Sugimoto A. Imiya, School of Science and National Institute of Institute of Media and Technology, Informatics, Information Technology, Chiba
More informationUnderstanding Variability
Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion
More informationCatadioptric camera model with conic mirror
LÓPEZ-NICOLÁS, SAGÜÉS: CATADIOPTRIC CAMERA MODEL WITH CONIC MIRROR Catadioptric camera model with conic mirror G. López-Nicolás gonlopez@unizar.es C. Sagüés csagues@unizar.es Instituto de Investigación
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationLight and the Properties of Reflection & Refraction
Light and the Properties of Reflection & Refraction OBJECTIVE To study the imaging properties of a plane mirror. To prove the law of reflection from the previous imaging study. To study the refraction
More informationReflection and Image Formation by Mirrors
Purpose Theory a. To study the reflection of light Reflection and Image Formation by Mirrors b. To study the formation and characteristics of images formed by different types of mirrors. When light (wave)
More information3D Sensing and Reconstruction Readings: Ch 12: , Ch 13: ,
3D Sensing and Reconstruction Readings: Ch 12: 12.5-6, Ch 13: 13.1-3, 13.9.4 Perspective Geometry Camera Model Stereo Triangulation 3D Reconstruction by Space Carving 3D Shape from X means getting 3D coordinates
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationStereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman
Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure
More informationMETRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS
METRIC PLANE RECTIFICATION USING SYMMETRIC VANISHING POINTS M. Lefler, H. Hel-Or Dept. of CS, University of Haifa, Israel Y. Hel-Or School of CS, IDC, Herzliya, Israel ABSTRACT Video analysis often requires
More informationLIGHT. Speed of light Law of Reflection Refraction Snell s Law Mirrors Lenses
LIGHT Speed of light Law of Reflection Refraction Snell s Law Mirrors Lenses Light = Electromagnetic Wave Requires No Medium to Travel Oscillating Electric and Magnetic Field Travel at the speed of light
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More informationLecture Outlines Chapter 26
Lecture Outlines Chapter 26 11/18/2013 2 Chapter 26 Geometrical Optics Objectives: After completing this module, you should be able to: Explain and discuss with diagrams, reflection and refraction of light
More informationDepth. Chapter Stereo Imaging
Chapter 11 Depth Calculating the distance of various points in the scene relative to the position of the camera is one of the important tasks for a computer vision system. A common method for extracting
More informationNicholas J. Giordano. Chapter 24. Geometrical Optics. Marilyn Akins, PhD Broome Community College
Nicholas J. Giordano www.cengage.com/physics/giordano Chapter 24 Geometrical Optics Marilyn Akins, PhD Broome Community College Optics The study of light is called optics Some highlights in the history
More informationPhysics for Scientists & Engineers 2
Geometric Optics Physics for Scientists & Engineers 2 Spring Semester 2005 Lecture 36! The study of light divides itself into three fields geometric optics wave optics quantum optics! In the previous chapter,
More informationEpipolar Geometry and the Essential Matrix
Epipolar Geometry and the Essential Matrix Carlo Tomasi The epipolar geometry of a pair of cameras expresses the fundamental relationship between any two corresponding points in the two image planes, and
More informationLecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19
Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line
More informationAP Physics: Curved Mirrors and Lenses
The Ray Model of Light Light often travels in straight lines. We represent light using rays, which are straight lines emanating from an object. This is an idealization, but is very useful for geometric
More informationLecture Outline Chapter 26. Physics, 4 th Edition James S. Walker. Copyright 2010 Pearson Education, Inc.
Lecture Outline Chapter 26 Physics, 4 th Edition James S. Walker Chapter 26 Geometrical Optics Units of Chapter 26 The Reflection of Light Forming Images with a Plane Mirror Spherical Mirrors Ray Tracing
More informationA Stratified Approach for Camera Calibration Using Spheres
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. XX, NO. Y, MONTH YEAR 1 A Stratified Approach for Camera Calibration Using Spheres Kwan-Yee K. Wong, Member, IEEE, Guoqiang Zhang, Student-Member, IEEE and Zhihu
More informationTransparent Object Shape Measurement Based on Deflectometry
Proceedings Transparent Object Shape Measurement Based on Deflectometry Zhichao Hao and Yuankun Liu * Opto-Electronics Department, Sichuan University, Chengdu 610065, China; 2016222055148@stu.scu.edu.cn
More informationMultiple View Geometry in Computer Vision
Multiple View Geometry in Computer Vision Prasanna Sahoo Department of Mathematics University of Louisville 1 More on Single View Geometry Lecture 11 2 In Chapter 5 we introduced projection matrix (which
More informationSTRUCTURE AND MOTION ESTIMATION FROM DYNAMIC SILHOUETTES UNDER PERSPECTIVE PROJECTION *
STRUCTURE AND MOTION ESTIMATION FROM DYNAMIC SILHOUETTES UNDER PERSPECTIVE PROJECTION * Tanuja Joshi Narendra Ahuja Jean Ponce Beckman Institute, University of Illinois, Urbana, Illinois 61801 Abstract:
More informationSilhouette Coherence for Camera Calibration under Circular Motion
Silhouette Coherence for Camera Calibration under Circular Motion Carlos Hernández, Francis Schmitt and Roberto Cipolla Appendix I 2 I. ERROR ANALYSIS OF THE SILHOUETTE COHERENCE AS A FUNCTION OF SILHOUETTE
More informationSupplementary Figure 1 Optimum transmissive mask design for shaping an incident light to a desired
Supplementary Figure 1 Optimum transmissive mask design for shaping an incident light to a desired tangential form. (a) The light from the sources and scatterers in the half space (1) passes through the
More informationLecture Notes (Reflection & Mirrors)
Lecture Notes (Reflection & Mirrors) Intro: - plane mirrors are flat, smooth surfaces from which light is reflected by regular reflection - light rays are reflected with equal angles of incidence and reflection
More informationL2 Data Acquisition. Mechanical measurement (CMM) Structured light Range images Shape from shading Other methods
L2 Data Acquisition Mechanical measurement (CMM) Structured light Range images Shape from shading Other methods 1 Coordinate Measurement Machine Touch based Slow Sparse Data Complex planning Accurate 2
More informationRefraction and Lenses. Honors Physics
Refraction and Lenses Honors Physics Refraction Refraction is based on the idea that LIGHT is passing through one MEDIUM into another. The question is, WHAT HAPPENS? Suppose you are running on the beach
More informationMapping textures on 3D geometric model using reflectance image
Mapping textures on 3D geometric model using reflectance image Ryo Kurazume M. D. Wheeler Katsushi Ikeuchi The University of Tokyo Cyra Technologies, Inc. The University of Tokyo fkurazume,kig@cvl.iis.u-tokyo.ac.jp
More informationLaser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR
Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and
More informationAffine Surface Reconstruction By Purposive Viewpoint Control
Affine Surface Reconstruction By Purposive Viewpoint Control Kiriakos N. Kutulakos kyros@cs.rochester.edu Department of Computer Sciences University of Rochester Rochester, NY 14627-0226 USA Abstract We
More informationAutomatic Feature Extraction of Pose-measuring System Based on Geometric Invariants
Automatic Feature Extraction of Pose-measuring System Based on Geometric Invariants Yan Lin 1,2 Bin Kong 2 Fei Zheng 2 1 Center for Biomimetic Sensing and Control Research, Institute of Intelligent Machines,
More informationOptics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles.
Optics 1- Light Nature: a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. The particles were either emitted by the object being viewed or emanated from
More informationStevens High School AP Physics II Work for Not-school
1. Gravitational waves are ripples in the fabric of space-time (more on this in the next unit) that travel at the speed of light (c = 3.00 x 10 8 m/s). In 2016, the LIGO (Laser Interferometry Gravitational
More informationdq dt I = Irradiance or Light Intensity is Flux Φ per area A (W/m 2 ) Φ =
Radiometry (From Intro to Optics, Pedrotti -4) Radiometry is measurement of Emag radiation (light) Consider a small spherical source Total energy radiating from the body over some time is Q total Radiant
More information3D Reconstruction from Scene Knowledge
Multiple-View Reconstruction from Scene Knowledge 3D Reconstruction from Scene Knowledge SYMMETRY & MULTIPLE-VIEW GEOMETRY Fundamental types of symmetry Equivalent views Symmetry based reconstruction MUTIPLE-VIEW
More informationA Robust Two Feature Points Based Depth Estimation Method 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence
More informationThe Ray model of Light. Reflection. Class 18
The Ray model of Light Over distances of a terrestrial scale light travels in a straight line. The path of a laser is now the best way we have of defining a straight line. The model of light which assumes
More informationPerceived shininess and rigidity - Measurements of shape-dependent specular flow of rotating objects
Perceived shininess and rigidity - Measurements of shape-dependent specular flow of rotating objects Katja Doerschner (1), Paul Schrater (1,,2), Dan Kersten (1) University of Minnesota Overview 1. Introduction
More informationTD2 : Stereoscopy and Tracking: solutions
TD2 : Stereoscopy and Tracking: solutions Preliminary: λ = P 0 with and λ > 0. If camera undergoes the rigid transform: (R,T), then with, so that is the intrinsic parameter matrix. C(Cx,Cy,Cz) is the point
More informationDEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION
2012 IEEE International Conference on Multimedia and Expo Workshops DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION Yasir Salih and Aamir S. Malik, Senior Member IEEE Centre for Intelligent
More informationChapter 7: Geometrical Optics
Chapter 7: Geometrical Optics 7. Reflection at a Spherical Surface L.O 7.. State laws of reflection Laws of reflection state: L.O The incident ray, the reflected ray and the normal all lie in the same
More informationEpipolar geometry contd.
Epipolar geometry contd. Estimating F 8-point algorithm The fundamental matrix F is defined by x' T Fx = 0 for any pair of matches x and x in two images. Let x=(u,v,1) T and x =(u,v,1) T, each match gives
More informationCV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more
CV: 3D to 2D mathematics Perspective transformation; camera calibration; stereo computation; and more Roadmap of topics n Review perspective transformation n Camera calibration n Stereo methods n Structured
More informationThree-Dimensional Computer Vision
\bshiaki Shirai Three-Dimensional Computer Vision With 313 Figures ' Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Table of Contents 1 Introduction 1 1.1 Three-Dimensional Computer Vision
More informationEECS 442 Computer vision. Announcements
EECS 442 Computer vision Announcements Midterm released after class (at 5pm) You ll have 46 hours to solve it. it s take home; you can use your notes and the books no internet must work on it individually
More information