Acquiring 3D Models from Rotation and Highlights
Jiang Yu Zheng, Yoshihiro Fukagawa, Tetsuo Ohtsuka and Norihiro Abe
Faculty of Computer Science and Systems Engineering
Kyushu Institute of Technology, Kawazu, Iizuka, Fukuoka 820, Japan

Abstract

This paper proposes an approach to acquiring 3D models of objects with specular reflectance for graphics use. Highlight and rotation information is employed in the model recovery. We control the object rotation and extract the motion of a highlight stripe, from which the object shape can be qualitatively inferred and quantitatively reconstructed by solving a first-order linear differential equation. We have experimented on simulated and real objects to obtain their models.

1. Introduction

The objective of this work is to establish a 3D graphics model of an object while it is rotated. For an object with rich texture, shape from motion can be used for model recovery [3,4]. For objects with only convex surfaces, the shape-from-contour method is applicable [1]. In this paper, we deal with smooth surfaces with specular reflectance, which may yield highlights. A highlight has usually been treated as noise among surface features, and some effort has been made to eliminate it from the object surface. However, when little texture appears on a smooth surface, highlights play an important role in perceiving shape. In order to recover a complete model, we put the object on a turntable and rotate it so that it reveals all of its surfaces to the camera. The rotation angle is readable, and the direction of the static illuminant is known from a simple calibration. The issue thus becomes shape from shading under known rotation. To simplify the problem, we use orthogonal projection and parallel illumination. At present, we assume objects have smooth surfaces. The motion information is taken from Epipolar-Plane Images (EPIs) parallel to the rotation plane. Two kinds of visual features are considered: fixed features, from either corner points or texture points, and highlights.
Among them, the highlight gradually shifts over the object surface during the rotation, and its motion can be tracked in the EPIs at different heights. From the obtained trajectories, the object surface can be calculated. It is not difficult to see that the surface normal can be determined from highlights shifting over the surface during the rotation. As to the surface itself, we found that a smooth surface can be described by a first-order differential equation which has a unique solution. Combining the fixed features as boundary conditions, we can finally reconstruct the model. In the following, we introduce some qualitative characteristics of the highlight trajectory and the principle of the shape recovery scheme. We then describe how to extract the necessary information from continuous images. Finally, experimental results are discussed.

2. Motion Characteristics of Features

2.1 Motion of Fixed Points

The system setting is displayed in Fig. 1. When an object rotates around a known axis, its continuous images are taken and a spatial-temporal volume is piled up. The y axis of the image frame is set parallel to the rotation axis using a simple calibration, and the optical axis of the camera passes through the rotation axis. An object point P(X, Y, Z), described in the object-centered coordinate system, is projected to p(x, y, θ) in the volume. The surface normal at the point is n = [nx, ny, nz]. The rotation angle θ is known and is clockwise in the analysis. A linear illuminant set in the vertical direction is long enough to produce highlights for surface normals with different ny components. It is located at a distant position so that the horizontal components of the incident light can be written as an approximate vector L = [Lx, Lz]. The components Lx and Lz can be further denoted by the light's angle φ0 from the camera direction, which gives L = [-sin φ0, cos φ0] with φ0 ∈ (-π/2, π/2). Obviously, the highlight stripes move over the object surface during the rotation.
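The piling of frames into a spatio-temporal volume, the slicing into EPIs, and the sinusoidal trace of a fixed point can be sketched in a few lines of numpy. The array shapes and the synthetic point below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Spatio-temporal volume: one frame per rotation step, stacked along
# the first (theta) axis.  Shape: (n_theta, height, width).
n_theta, height, width = 360, 240, 320
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
volume = np.zeros((n_theta, height, width), dtype=np.uint8)

# Because the image y axis is parallel to the rotation axis, an EPI is
# simply the stack of all scanlines at one image row y.
def epi_at_height(volume, y):
    return volume[:, y, :]               # shape (n_theta, width)

epi = epi_at_height(volume, height // 2)

# A fixed surface point (X, Z) on that rotation plane traces the
# sinusoid x(theta) = X cos(theta) + Z sin(theta) in the EPI, with
# amplitude equal to its distance from the rotation axis.
X, Z = 3.0, 4.0
x_trace = X * np.cos(theta) + Z * np.sin(theta)
```

The amplitude of `x_trace` equals the point's distance from the axis (here 5.0), which is why distinct fixed points leave sinusoids of different amplitudes and phases in the same EPI.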
We denote the highlight point as H(X(θ), Y(θ), Z(θ)) and its image position as h(x(θ), y, θ).

Fig. 1. Image formation geometry of shape from rotation, showing the orthogonal projection, the rotation axis and the light.

As an object rotates, its surface points leave traces in the corresponding EPIs as sinusoidal curves over their half period of visibility, even when they are not distinct enough to be tracked. If the component of the surface normal in the rotation plane is discontinuous at a point, the shadings on its two sides differ and its trace in the EPI appears as an edge (typically a segment of a sinusoidal curve). If the albedo is discontinuous at a point, the point likewise draws a sinusoidal edge trace in the EPI. These two kinds of points are called fixed points since, for such a point, the multiple lines of sight through its projections in different images cross at the same 3D position.

Fig. 2. One cross section of an object parallel to the rotation plane, showing the rotation axis, the illumination direction and the image frame.

2.2 Qualitative Motion of Highlights

Let us first look qualitatively at the motion of highlights for different shapes: corner, convex, linear, concave, etc. Based on an analysis of the highlight trajectories, we can qualitatively infer the shapes. If the surface normal and the surface albedo are continuous at a point, matching its projections in continuous images is no longer possible as it is for fixed points [1,2,6]. An alternative is to look at the shading. As an object rotates, its surface elements face the illuminant in turn. A highlight, determined by the surface normal, shifts over the surface, and its trace can be located with respect to the rotation angle. According to the curvature along the object boundary on a horizontal cross section, shapes are categorized as convex corner, convex, linear or concave. We find the interesting effect that highlights have some basic types of trajectories relative to the traces of surface points in the corresponding Epipolar-Plane Image. Figure 3 shows highlights moving over the surfaces and the corresponding trajectories relative to the traces of surface points. On a convex shape, for example, the highlight moves relatively in the inverse direction of the rotation, which makes its image velocity lower than that of the surface points (see its trace at right). A linear boundary, however, has all its points facing the light direction at the same time, which generates a horizontal stripe of highlights in the EPI.
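The backward shift of a highlight over a convex surface can be checked numerically. The sketch below uses a hypothetical circular (convex) cross-section centered on the rotation axis: in the camera frame the highlight normal is fixed (the bisector of the viewing and illumination directions), so as the object turns by θ the highlight must slide against the rotation over the surface. The light angle and sampling density are arbitrary choices:

```python
import numpy as np

phi0 = np.radians(40.0)                    # light angle from the camera
bisector = phi0 / 2.0                      # fixed normal direction of the highlight
psi = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)  # surface parameter

def highlight_param(theta):
    """Surface parameter psi of the highlight at rotation angle theta."""
    # After rotating by theta, the camera-frame normal angle of the
    # surface point psi is psi + theta; the highlight is the point whose
    # normal is best aligned with the fixed bisector.
    return psi[np.argmax(np.cos(psi + theta - bisector))]

d_theta = 0.1
shift = highlight_param(d_theta) - highlight_param(0.0)
shift = (shift + np.pi) % (2.0 * np.pi) - np.pi   # wrap to (-pi, pi]
# shift is approximately -d_theta: the highlight moved on the surface
# against the rotation, so its image velocity is below that of the
# surface points, matching the qualitative behaviour described above.
```

For this centered circle the drift rate is exactly -1 per unit of rotation; off-center convex arcs drift backwards at other rates, but always in the inverse direction.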
Further, a concave shape has its highlight moving in the same direction as the rotation; its image velocity is higher than that of the surface points. At a corner, no highlight appears if the corner is strictly sharp. Combining different surfaces corresponds to connecting their highlight trajectories in the EPI. Figure 4 shows two such shapes, convex-concave-convex and corner-linear-convex combinations, respectively. In the first case, highlight A moves on the first convex surface. Meanwhile, point B with zero curvature (where the shape changes from concave to convex) becomes a highlight point. It splits into two highlight points C and D that move on the convex and concave parts separately. Point D then merges with highlight point A at another zero-curvature point E and disappears. This splitting and merging process can be observed from the highlight trajectories in the corresponding EPI. We can assert that the trajectories of concave and convex shapes must connect smoothly, and that the tangent of the trajectory is horizontal at the connecting points. This is because the splitting and merging points have zero curvature and behave as short linear segments, which have horizontal trajectories in the EPI. Similarly, we can qualitatively derive the highlight trajectory for the corner-linear-convex combination (Fig. 4(b)). It is a sinusoidal curve up to A, followed by a horizontal segment AB, and then the trace of a convex shape. At the corner point (an extreme case of a convex shape), the highlight stays at the same surface point.

Fig. 3. Trajectories of highlights over the traces of surface points for different shapes: corner (C = ∞), convex (C > 0), plane (C = 0) and concave (C < 0); for each shape, the position of the highlight, the illumination direction, the trace of the highlight and the traces of surface points are shown.

As a result, a highlight shifts over the object surface and passes all surface points at least once if no serious
occlusion of light occurs. Generated from either fixed or shifting points on the object surface, a queue of connected trajectories in the EPI crosses the trajectories of all surface points within one period of rotation. We therefore attempt to compute the positions of all surface points from this queue.

Fig. 4. Connection of highlight traces according to the combination of shapes: (a) a convex-concave-convex shape, (b) a corner-plane-convex shape; (1) the highlight movement, (2) the highlight traces in the EPIs.

3. Shape Recovery Schemes

3.1 Shape Estimation from Fixed Points

From the camera geometry (Fig. 2), the viewing direction under orthogonal projection is v = [-sin θ, cos θ] in the object-centered coordinate system. The image position of a fixed point can be written as

x(θ) = P · x = X(θ) cos θ + Z(θ) sin θ    (1)

where x is the unit vector of the horizontal image axis. Differentiating equation (1) with respect to θ and using the fixed-point constraint (X'θ, Z'θ) = (0, 0), we obtain

x'(θ) = -X sin θ + Z cos θ    (2)

and hence the position of P as

X = x cos θ - x' sin θ,    Z = x sin θ + x' cos θ    (3)

which means the position can be estimated from x(θ) and the tangent direction of its trajectory at θ. For a fixed point, this computation can be carried out many times along its tracked trajectory. The problem is thus over-constrained, and the multiple measurements can be fused into a more accurate result using either a Kalman filter or the least-squares method. These estimates not only give the positions of non-smooth parts, but also serve as boundary conditions for estimating the remaining parts.

3.2 Recovery of Smooth Specular Surfaces

For a highlight point, the camera geometry likewise gives

x(θ) = xH = X(θ) cos θ + Z(θ) sin θ    (4)

Assuming the shape is not linear, the corresponding highlight trace is not horizontal and we can take the derivative of (4) with respect to θ to obtain

x'(θ) = X'θ cos θ + Z'θ sin θ - X sin θ + Z cos θ    (5)

The first two terms express the shift of H on the surface, and (X'θ, Z'θ) gives the tangent direction of the boundary. In fact, the surface normal at the highlight point bisects the angle between the camera viewing direction and the illumination direction in the rotation plane (the incident angle equals the reflection angle). The normal therefore points in the direction (θ+φ0)/2 in the object-centered coordinate system, and the tangent, perpendicular to it, satisfies

(X'θ, Z'θ) · (-sin((θ+φ0)/2), cos((θ+φ0)/2)) = 0    (6)

From Eqs. (4)-(6), two first-order linear differential equations can be written. Eliminating X gives

Z'θ cos((θ-φ0)/2) / sin((θ+φ0)/2) + Z / cos θ = x'(θ) + x(θ) tan θ    (7)

in the domain θ ≠ π/2, 3π/2, and eliminating Z gives

X'θ cos((θ-φ0)/2) / cos((θ+φ0)/2) - X / sin θ = x'(θ) - x(θ) cot θ    (8)

in the domain θ ≠ 0, π. By the integrating-factor method, these equations have the unique solutions

Z(θ) = Φ(θ)^-1 [ Z(θi) Φ(θi) + ∫θi→θ Φ(τ) (x'(τ) + x(τ) tan τ) sin((τ+φ0)/2) / cos((τ-φ0)/2) dτ ]    (9)

where

Φ(θ) = exp( ∫ sin((τ+φ0)/2) / (cos τ cos((τ-φ0)/2)) dτ )    (10)

and

X(θ) = Φ(θ)^-1 [ X(θi) Φ(θi) + ∫θi→θ Φ(τ) (x'(τ) - x(τ) cot τ) cos((τ+φ0)/2) / cos((τ-φ0)/2) dτ ]    (11)

where

Φ(θ) = exp( -∫ cos((τ+φ0)/2) / (sin τ cos((τ-φ0)/2)) dτ )    (12)

respectively. In these equations, (X(θi), Z(θi)) is a boundary condition for the solution.
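As a sketch of the over-constrained fixed-point estimate, the per-angle positions from equations (2) and (3) can be computed along a synthetic trajectory and averaged, a simple stand-in for the Kalman-filter or least-squares fusion mentioned above. The point coordinates, sampling and finite-difference derivative are all illustrative assumptions:

```python
import numpy as np

# A fixed point (X, Z) seen over half a rotation; x(theta) is its
# tracked image trajectory and x'(theta) its tangent, estimated here
# by finite differences.
X_true, Z_true = 2.0, -1.5
theta = np.linspace(0.0, np.pi, 500)          # half visible period
x = X_true * np.cos(theta) + Z_true * np.sin(theta)

dx = np.gradient(x, theta)                    # finite-difference x'(theta)

# Equation (3) at every angle, then a plain average as the fusion step.
X_est = np.mean(x * np.cos(theta) - dx * np.sin(theta))
Z_est = np.mean(x * np.sin(theta) + dx * np.cos(theta))
```

With noisy tracking, a weighted least-squares fit (or a Kalman filter run along θ) would replace the plain mean, but the per-angle formula is the same.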
We apply (9) and (11) separately in the domains [-π/4, π/4], [π-π/4, π+π/4] and [π/2-π/4, π/2+π/4], [3π/2-π/4, 3π/2+π/4], respectively. After Z(θ) or X(θ) is obtained, the other coordinate is given by equation (4). In order to improve the accuracy of the solution, we take images of the rotating object at very small intervals of θ. This makes the summation used in the numeric calculation close to the integrals in formulae (9) and (11).

If, on the other hand, a boundary is linear, multiple points become highlights at the same angle θ, so taking the derivative of x(θ) with respect to θ as in Eq. (5) is impossible: there are multiple values of x(θ) along a horizontal line in the EPI. Nevertheless, a linear segment can be determined in an even simpler way. Suppose at least one point (Xi, Zi) on the line is known; any other point (X, Z) on it can be estimated from its image position relative to the known point as

X = cos((θ+φ0)/2) (x(θ) - xi(θ)) / cos((θ-φ0)/2) + Xi
Z = sin((θ+φ0)/2) (x(θ) - xi(θ)) / cos((θ-φ0)/2) + Zi    (13)

since the direction of the line must be perpendicular to the direction midway between the camera direction and the illumination direction.

3.3 Boundary Conditions for the Solutions

The boundary conditions for the surface-shape solutions of equations (9), (11) and (13) come from the positions of fixed points, which are very accurate, as well as from contours when no fixed point is available [1]. The trajectory of a corner point (a discontinuity of the surface normal) connects trajectories of highlights, while the trajectory of a texture point (a discontinuity of albedo) crosses a highlight trajectory without breaking it, as Fig. 5 depicts. At the end points or crossing points of a highlight trace, the 3D positions become known. From these known points, we can start estimating the smooth surfaces simultaneously by computing the integrals along the highlight trajectories in the EPIs. This process continues until a singular point with infinite x'(θ) (where the trace has a horizontal tangent) is reached.
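The numeric counterpart of the integrals in (9) and (11) can be sketched generically: a linear first-order ODE Z'(θ) + P(θ)Z = Q(θ) integrated from a boundary value with the integrating-factor formula, the integrals replaced by trapezoidal sums over finely sampled angles. The P and Q below are toy stand-ins used only to check the scheme, not the paper's coefficients:

```python
import numpy as np

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral of y over x, zero at x[0]."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

def solve_linear_ode(theta, P, Q, z0):
    """Solve Z' + P Z = Q with Z(theta[0]) = z0 by the integrating
    factor Phi = exp(int P):  Z = (z0 + int Phi*Q) / Phi."""
    phi = np.exp(cumtrapz0(P, theta))
    return (z0 + cumtrapz0(phi * Q, theta)) / phi

# Sanity check on Z' + Z = 1, Z(0) = 0, whose solution is 1 - e^-theta.
theta = np.linspace(0.0, 2.0, 2001)
Z = solve_linear_ode(theta, np.ones_like(theta), np.ones_like(theta), 0.0)
```

Finer sampling of θ (more images per rotation) shrinks the trapezoidal error quadratically, which matches the remark above that very small angle intervals bring the summation close to the integral.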
At such a singular point, we change the direction of integration for further propagation of the shape.

4. Detectability of Feature Trajectories

Let us examine how the traces of features in the EPIs can be extracted for shape recovery. Assume the diffuse reflectance components on the two sides of a texture point are R1 and R2, and the component of the surface normal in the rotation plane is n(θ). During the part of the rotation without highlights, the difference of the image intensities on the two sides of the point is

Δ(θ) = (R1 - R2) (C1 cos<n(θ), L> + C2)    (14)

where C1 is proportional to the intensity S of the illumination and C2 is a constant accounting for ambient light. This means that a texture point is most distinct for tracking when it faces the light. A normal edge-detection operator can locate the trajectory of a texture point. Assuming the horizontal components of the two normals at a corner point are n1(θ) and n2(θ), and the albedo there is R, the difference of the image intensities at the corner becomes

Δ(θ) = R S (cos<n1(θ), L> - cos<n2(θ), L>) + C3    (15)

where C3 is a constant. The only variables in it are the angles between the normals and L. If the corner is very sharp, Δ becomes zero at the particular rotation angle where the vector L bisects the angle between n1(θ) and n2(θ). This means the corner is undetectable and its trajectory is hard to follow when it faces the light.

Fig. 5. Estimation of a smooth shape starting from fixed points: the trace of a corner point, the traces of texture points from a curved surface and a linear surface, and the highlight trace x(θ).

The intensity of a surface point viewed by the camera is determined by the normal direction and the illumination with respect to the camera. In order to follow the movement of a highlight, we filter the EPI with a 1-D ridge-type filter and pick up the peaks. The ridge-type filter is aligned both horizontally and vertically to detect highlight traces from non-linear surfaces and from linear segments.

5. Experimental Results

We have experimented on simulated scenes and real scenes.
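The highlight-tracking step above (a 1-D ridge filter run across each EPI scanline, then peak picking) might look like the following sketch; the kernel width and the synthetic EPI are assumptions, since the paper does not specify its exact filter:

```python
import numpy as np

def ridge_positions(epi):
    """Per-scanline highlight position: strongest response of a 1-D
    second-derivative (ridge) kernel along each row of the EPI."""
    kernel = np.array([-1.0, 2.0, -1.0])     # bright-ridge detector
    positions = []
    for row in epi.astype(float):
        resp = np.convolve(row, kernel, mode="same")
        positions.append(int(np.argmax(resp)))
    return np.array(positions)

# Synthetic EPI: a bright highlight stripe drifting one pixel per
# rotation step across otherwise dark scanlines.
theta_steps, width = 50, 100
epi = np.zeros((theta_steps, width))
for t in range(theta_steps):
    epi[t, 20 + t] = 1.0
trace = ridge_positions(epi)
```

Running the same kernel down the columns instead of along the rows picks up the horizontal stripes produced by linear segments, which is the vertical alignment mentioned above.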
Figure 6 gives a simulation result with three types of shapes, their corresponding EPIs, and the trajectories of surface points and highlights. Figure 7a shows another simulated EPI from a cylinder; the light comes from the same direction as the camera (φ0 = 0) and the rotation axis is at the center of the cylinder. The shape recovered with our method is shown in Fig. 7b. Similarly, Fig. 8a shows a highlight trace generated from a shape that is recovered in Fig. 8b. For real objects, a bottle is put on the rotation table (Fig. 9a). One of its EPIs is shown in Fig. 9b, where a highlight trajectory is visible. We filter the EPI to track the trajectory (Fig. 9c). Figure 9d displays the shape on that rotation plane, and the model of the bottle is recovered by connecting the shapes on the rotation planes at different heights (Fig. 9e). For objects containing planes, a box and one of its EPIs are shown in Fig. 10a. The light direction is about 60 degrees from that of the camera. Horizontal stripes of highlights from the four side planes are visible in the EPI, each at an interval of Δθ = π/2 from its neighbors. The 3D shape of the box is computed by equations (3) and (13) and is shown in Fig. 10b.
Fig. 6. A simulated result of surface-point traces and highlight traces (bold curves) for rotating objects. (a) Three kinds of shapes containing convex, concave and linear parts (surface points are marked with *); the camera direction is along the Z axis. (b) The EPIs; both specular and diffuse reflectance are present, and the light direction is φ0 = 60°. (c) Traces of surface points and highlights.

Fig. 7. An ideal cylinder for shape recovery. (a) The EPI with the trace of a fixed point (dark curve). (b) The recovered shape on the rotation plane.

Fig. 8a. A simulated EPI whose shape is shown in Fig. 8b.
6. Conclusion

In this paper, we described a new method for the qualitative identification and quantitative recovery of shapes with specular reflectance by controlling the object rotation and detecting highlights. We are working on various shapes and on improving the accuracy of the method. We will also investigate surfaces without specular reflectance and generalize the method.

References

[1] J. Y. Zheng, "Acquiring 3D models from sequences of contours", IEEE Trans. PAMI, Vol. 16, No. 2, Feb. 1994.
[2] J. Y. Zheng and F. Kishino, "Verifying and combining different visual cues into a complete 3D model", CVPR-92, 1992.
[3] R. Szeliski, "Shape from rotation", CVPR-91, 1991.
[4] C. Tomasi and T. Kanade, "Shape and motion without depth", 3rd ICCV, pp. 91-95, 1990.
[5] A. Blake and G. Brelstaff, "Geometry from specularities", 2nd ICCV, 1988.
[6] B. K. P. Horn, Shape from Shading, The MIT Press.
[7] H. Baker and R. Bolles, "Generalizing epipolar-plane image analysis on the spatiotemporal surface", CVPR-88, pp. 2-9, 1988.

Fig. 8. A simulated EPI with the recovered shape.

Fig. 9. Modeling a real object with smooth surfaces. (a) A bottle on the rotation table; a dark line is drawn on the surface to provide the boundary condition for the shape estimation. (b) One EPI of the bottle. (c) The extracted highlight trace. (d) The shape at a cross section of the bottle. (e) The 3D model constructed by connecting the shapes at all rotation planes.

Fig. 10. Recovering the linear shape of a box. (a) The objects. (b) The EPI at the height of the box. (c) The estimated cross section of the box.
More informationReflection & Mirrors
Reflection & Mirrors Geometric Optics Using a Ray Approximation Light travels in a straight-line path in a homogeneous medium until it encounters a boundary between two different media A ray of light is
More informationOmni Stereo Vision of Cooperative Mobile Robots
Omni Stereo Vision of Cooperative Mobile Robots Zhigang Zhu*, Jizhong Xiao** *Department of Computer Science **Department of Electrical Engineering The City College of the City University of New York (CUNY)
More informationLight: Geometric Optics (Chapter 23)
Light: Geometric Optics (Chapter 23) Units of Chapter 23 The Ray Model of Light Reflection; Image Formed by a Plane Mirror Formation of Images by Spherical Index of Refraction Refraction: Snell s Law 1
More informationRectification of distorted elemental image array using four markers in three-dimensional integral imaging
Rectification of distorted elemental image array using four markers in three-dimensional integral imaging Hyeonah Jeong 1 and Hoon Yoo 2 * 1 Department of Computer Science, SangMyung University, Korea.
More informationMotion Estimation. There are three main types (or applications) of motion estimation:
Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion
More informationStereo CSE 576. Ali Farhadi. Several slides from Larry Zitnick and Steve Seitz
Stereo CSE 576 Ali Farhadi Several slides from Larry Zitnick and Steve Seitz Why do we perceive depth? What do humans use as depth cues? Motion Convergence When watching an object close to us, our eyes
More information1 (5 max) 2 (10 max) 3 (20 max) 4 (30 max) 5 (10 max) 6 (15 extra max) total (75 max + 15 extra)
Mierm Exam CS223b Stanford CS223b Computer Vision, Winter 2004 Feb. 18, 2004 Full Name: Email: This exam has 7 pages. Make sure your exam is not missing any sheets, and write your name on every page. The
More informationLecture Notes (Reflection & Mirrors)
Lecture Notes (Reflection & Mirrors) Intro: - plane mirrors are flat, smooth surfaces from which light is reflected by regular reflection - light rays are reflected with equal angles of incidence and reflection
More informationJim Lambers MAT 169 Fall Semester Lecture 33 Notes
Jim Lambers MAT 169 Fall Semester 2009-10 Lecture 33 Notes These notes correspond to Section 9.3 in the text. Polar Coordinates Throughout this course, we have denoted a point in the plane by an ordered
More informationOcclusion Detection of Real Objects using Contour Based Stereo Matching
Occlusion Detection of Real Objects using Contour Based Stereo Matching Kenichi Hayashi, Hirokazu Kato, Shogo Nishida Graduate School of Engineering Science, Osaka University,1-3 Machikaneyama-cho, Toyonaka,
More informationEdge and corner detection
Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements
More informationStereo imaging ideal geometry
Stereo imaging ideal geometry (X,Y,Z) Z f (x L,y L ) f (x R,y R ) Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and
More information1. What is the law of reflection?
Name: Skill Sheet 7.A The Law of Reflection The law of reflection works perfectly with light and the smooth surface of a mirror. However, you can apply this law to other situations. For example, how would
More informationPerceptual Grouping from Motion Cues Using Tensor Voting
Perceptual Grouping from Motion Cues Using Tensor Voting 1. Research Team Project Leader: Graduate Students: Prof. Gérard Medioni, Computer Science Mircea Nicolescu, Changki Min 2. Statement of Project
More informationHomework #2. Shading, Projections, Texture Mapping, Ray Tracing, and Bezier Curves
Computer Graphics Instructor: Brian Curless CSEP 557 Autumn 2016 Homework #2 Shading, Projections, Texture Mapping, Ray Tracing, and Bezier Curves Assigned: Wednesday, Nov 16 th Due: Wednesday, Nov 30
More informationOptics. a- Before the beginning of the nineteenth century, light was considered to be a stream of particles.
Optics 1- Light Nature: a- Before the beginning of the nineteenth century, light was considered to be a stream of particles. The particles were either emitted by the object being viewed or emanated from
More informationThree-Dimensional Sensors Lecture 2: Projected-Light Depth Cameras
Three-Dimensional Sensors Lecture 2: Projected-Light Depth Cameras Radu Horaud INRIA Grenoble Rhone-Alpes, France Radu.Horaud@inria.fr http://perception.inrialpes.fr/ Outline The geometry of active stereo.
More informationOther Reconstruction Techniques
Other Reconstruction Techniques Ruigang Yang CS 684 CS 684 Spring 2004 1 Taxonomy of Range Sensing From Brain Curless, SIGGRAPH 00 Lecture notes CS 684 Spring 2004 2 Taxonomy of Range Scanning (cont.)
More informationTime-to-Contact from Image Intensity
Time-to-Contact from Image Intensity Yukitoshi Watanabe Fumihiko Sakaue Jun Sato Nagoya Institute of Technology Gokiso, Showa, Nagoya, 466-8555, Japan {yukitoshi@cv.,sakaue@,junsato@}nitech.ac.jp Abstract
More informationA Factorization Method for Structure from Planar Motion
A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College
More informationIn this chapter, we will investigate what have become the standard applications of the integral:
Chapter 8 Overview: Applications of Integrals Calculus, like most mathematical fields, began with trying to solve everyday problems. The theory and operations were formalized later. As early as 70 BC,
More informationLecture 24: More on Reflectance CAP 5415
Lecture 24: More on Reflectance CAP 5415 Recovering Shape We ve talked about photometric stereo, where we assumed that a surface was diffuse Could calculate surface normals and albedo What if the surface
More informationRange Sensors (time of flight) (1)
Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors
More informationTopics and things to know about them:
Practice Final CMSC 427 Distributed Tuesday, December 11, 2007 Review Session, Monday, December 17, 5:00pm, 4424 AV Williams Final: 10:30 AM Wednesday, December 19, 2007 General Guidelines: The final will
More informationComparison between Motion Analysis and Stereo
MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis
More informationOptical Flow-Based Person Tracking by Multiple Cameras
Proc. IEEE Int. Conf. on Multisensor Fusion and Integration in Intelligent Systems, Baden-Baden, Germany, Aug. 2001. Optical Flow-Based Person Tracking by Multiple Cameras Hideki Tsutsui, Jun Miura, and
More informationMATH 31A HOMEWORK 9 (DUE 12/6) PARTS (A) AND (B) SECTION 5.4. f(x) = x + 1 x 2 + 9, F (7) = 0
FROM ROGAWSKI S CALCULUS (2ND ED.) SECTION 5.4 18.) Express the antiderivative F (x) of f(x) satisfying the given initial condition as an integral. f(x) = x + 1 x 2 + 9, F (7) = 28.) Find G (1), where
More informationCS 4620 Midterm, March 21, 2017
CS 460 Midterm, March 1, 017 This 90-minute exam has 4 questions worth a total of 100 points. Use the back of the pages if you need more space. Academic Integrity is expected of all students of Cornell
More informationEE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline
1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation
More informationChapter 6 Some Applications of the Integral
Chapter 6 Some Applications of the Integral More on Area More on Area Integrating the vertical separation gives Riemann Sums of the form More on Area Example Find the area A of the set shaded in Figure
More informationThree-Dimensional Computer Vision
\bshiaki Shirai Three-Dimensional Computer Vision With 313 Figures ' Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Table of Contents 1 Introduction 1 1.1 Three-Dimensional Computer Vision
More informationIntroduction to 3D Imaging: Perceiving 3D from 2D Images
Introduction to 3D Imaging: Perceiving 3D from 2D Images How can we derive 3D information from one or more 2D images? There have been 2 approaches: 1. intrinsic images: a 2D representation that stores
More informationDiscover how to solve this problem in this chapter.
A 2 cm tall object is 12 cm in front of a spherical mirror. A 1.2 cm tall erect image is then obtained. What kind of mirror is used (concave, plane or convex) and what is its focal length? www.totalsafes.co.uk/interior-convex-mirror-900mm.html
More information10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.
Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic
More informationChapter 32 Light: Reflection and Refraction. Copyright 2009 Pearson Education, Inc.
Chapter 32 Light: Reflection and Refraction Units of Chapter 32 The Ray Model of Light Reflection; Image Formation by a Plane Mirror Formation of Images by Spherical Mirrors Index of Refraction Refraction:
More informationCS 787: Assignment 4, Stereo Vision: Block Matching and Dynamic Programming Due: 12:00noon, Fri. Mar. 30, 2007.
CS 787: Assignment 4, Stereo Vision: Block Matching and Dynamic Programming Due: 12:00noon, Fri. Mar. 30, 2007. In this assignment you will implement and test some simple stereo algorithms discussed in
More informationGeometrical Optics INTRODUCTION. Wave Fronts and Rays
Geometrical Optics INTRODUCTION In this experiment, the optical characteristics of mirrors, lenses, and prisms will be studied based on using the following physics definitions and relationships plus simple
More informationPARAMETRIC EQUATIONS AND POLAR COORDINATES
10 PARAMETRIC EQUATIONS AND POLAR COORDINATES PARAMETRIC EQUATIONS & POLAR COORDINATES A coordinate system represents a point in the plane by an ordered pair of numbers called coordinates. PARAMETRIC EQUATIONS
More informationPipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11
Pipeline Operations CS 4620 Lecture 11 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives to pixels RASTERIZATION
More informationAppendix 1: Manual for Fovea Software
1 Appendix 1: Manual for Fovea Software Fovea is a software to calculate foveal width and depth by detecting local maxima and minima from fovea images in order to estimate foveal depth and width. This
More informationcoding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight
Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image
More informationDepth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth
Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationComputer Vision Lecture 20
Computer Perceptual Vision and Sensory WS 16/76 Augmented Computing Many slides adapted from K. Grauman, S. Seitz, R. Szeliski, M. Pollefeys, S. Lazebnik Computer Vision Lecture 20 Motion and Optical Flow
More informationStochastic Road Shape Estimation, B. Southall & C. Taylor. Review by: Christopher Rasmussen
Stochastic Road Shape Estimation, B. Southall & C. Taylor Review by: Christopher Rasmussen September 26, 2002 Announcements Readings for next Tuesday: Chapter 14-14.4, 22-22.5 in Forsyth & Ponce Main Contributions
More information