ROBOTICS 01PEEQW. Basilio Bona DAUIN Politecnico di Torino
1 ROBOTICS 01PEEQW DAUIN Politecnico di Torino
2 Mobile & Service Robotics Sensors for Robotics 4
3 Vision. Vision is the most important sense in humans and is becoming important in robotics as well: it is not expensive and is rich in information. Vision includes three steps:
- data recording and transformation in the retina
- data transmission through the optic nerves
- data elaboration by the brain
4 Vision sensors. CCD (Charge-Coupled Device: light-sensitive discharging capacitors). CMOS (Complementary Metal-Oxide-Semiconductor technology).
5 CCD sensors. A CCD sensor consists of an array of capacitors, each accumulating a charge proportional to the quantity of light hitting it. The charge in each capacitor is then turned into a numerical value by the camera's internal electronics to produce a picture.
Advantages/features of CCD sensors:
- Conversion takes place in the chip without distortion
- CCDs have very high uniformity
- Good for HD-quality images (not videos)
- More sensitive: better images in low light
- Cleaner, less grainy, low-noise images
- CCD sensors have been produced for a longer period of time
Disadvantages of CCD sensors:
- Consume much more power
- Interlaced readout; inferior HD videos
- Lower pixel rates
- Expensive, as they require special manufacturing
6 CMOS sensors. CMOS sensors convert charge directly at the generating photosite thanks to a per-pixel amplifier: several transistors at each pixel amplify and move the charge over conventional wires, and the charge is then turned into a numerical value corresponding to the image. This lets them avoid several charge transfers and increases the processing speed. CMOS sensors do not require any special manufacturing; most digital cameras today use a CMOS sensor, as it reduces cost.
Advantages/features of CMOS sensors:
- Consume less power (up to 100 times less than CCD)
- Cheaper
- Each pixel can be individually addressed
- High reading rate
- Better HD videos
Disadvantages of CMOS sensors:
- More susceptible to noise; images are sometimes grainy
- Need more light for a good image
Despite these disadvantages, CMOS sensors are widely used in mobile phones, tablets, PDAs, and most digital cameras. CCD cameras produce better images, but CMOS sensors are catching up fast thanks to their low power consumption.
7 Artificial vision issues. Projection from a 3D world onto a 2D plane: perspective projection (transformation matrices). Discretization effects due to pixels (CCD or CMOS). Misalignment errors (hardware). Typical effects: parallel lines imaged as converging lines; pixel discretization.
8 Camera models. Pinhole camera (aka perspective camera).
9 Pinhole camera. Light from points A and B passes through a hole of small diameter and forms point images on the image plane; the image is reversed. Decreasing the image-plane distance or the hole diameter makes the point images sharper; increasing the hole diameter makes them brighter. The pinhole camera has infinite depth-of-field and infinite depth-of-focus.
10 Camera models. Thin lens camera: the lens has a thickness d that is negligible compared to the radii of curvature R_1, R_2 of the lens surfaces. Rays are refracted as they go through the lens (refraction index n). Thin lens equation: 1/f = (n - 1)(1/R_1 + 1/R_2), with R_i > 0 if the surface is convex.
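As a quick numeric check of the thin lens equation above, a minimal sketch in Python (the function name and the sample values are illustrative, not from the slides):

```python
# Thin lens (lensmaker's) equation: 1/f = (n - 1) * (1/R1 + 1/R2).
# All lengths in meters; n is the refraction index, R1 and R2 the
# radii of curvature of the two surfaces (positive if convex).
def focal_length(n, r1, r2):
    """Focal length of a thin lens from the lensmaker's equation."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 + 1.0 / r2))
```

For a symmetric glass lens (n = 1.5) with both radii 0.1 m, this gives f = 0.1 m.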
11 Thin lens camera. The thin lens camera is reversible. Rays parallel to the optical axis pass through the focus, and vice versa; rays through the lens center are not refracted. There are two symmetrical foci, at distance f from the lens center on either side along the optical axis. A true lens shows aberration phenomena.
12 Aberration. Spherical aberration.
13 Image formation. Thin lens approximation (pinhole camera). Elements of the figure: 3D object, focal plane through F, principal image plane π, reversed image plane, optical axis.
14 Image formation and equations. Lens equation: 1/p + 1/q = 1/f, i.e. pq = f(p + q), equivalently (p - f)(q - f) = f^2, where p is the object distance, q the image distance, and f the focal distance of a real object (the field-of-view angle is set by the image-plane size and f). Solving for q gives q = pf/(p - f); if p >> f then q ≈ f, i.e. the image plane is approximately in the focal plane.
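The relation q = pf/(p - f) can be sketched as follows (function name and sample values are illustrative):

```python
def image_distance(p, f):
    """Image distance q from the lens equation 1/p + 1/q = 1/f,
    solved as q = p*f / (p - f)."""
    return p * f / (p - f)
```

For example, an object at p = 100 m with f = 0.05 m gives q ≈ 0.05 m, confirming that for p >> f the image plane is approximately in the focal plane.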
15 Image formation. Figure: a 3D point P with camera coordinates p = (p_x, p_y, p_z) is imaged at the point P_i with image coordinates p^i = (x^i, y^i) on the image plane π; C is the camera center and f the distance of the image plane along the optical axis.
16 Transformations. Coordinate transformation between the world frame and the camera frame; projection of 3D point coordinates onto 2D image-plane coordinates; coordinate transformation between possible choices of the image coordinate frame.
17 Transformations. World frame R_0 to camera frame R_c: homogeneous transformation T_0^c = [R_0^c t_0^c; 0^T 1]. Camera frame R_c to image plane R_i: perspective projection with focal length f. Image plane R_i to pixel frame R_pix: rescaling and optical correction.
18 Reference frames. World frame R_0; camera frame R_c with origin C on the optical axis; focal plane through F at distance f; image frame R_i on the image plane π; pixel frame R_pix with coordinates (u, v) and origin O. R_c to R_i is a translation; R_i to R_pix is a translation plus a scale change.
19 Vector notation. In 3D, a point has coordinates p = (x, y, z)^T expressed in R_0 or R_c, depending on the reference frame. In 2D, p^i = (x^i, y^i)^T in the image frame R_i and p^pix = (u, v)^T in pixel units in R_pix.
20 Camera projections. Perspective projection: x^i = f p_x / p_z. Orthographic projection: if p_z ≈ const, x^i = α p_x; this holds when the scene depth variation is small compared to the distance from the camera, while perspective effects dominate when it is large. The pixel height of similar subjects differs when their distance from the camera varies a lot: on the left the persons have different pixel heights, while on the right they have approximately similar heights, since their distance from the camera is large and does not vary much.
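The two projection models can be sketched side by side (function names and constants are illustrative):

```python
def perspective_x(p_x, p_z, f):
    """Perspective projection: image coordinate x_i = f * p_x / p_z."""
    return f * p_x / p_z

def orthographic_x(p_x, alpha):
    """Orthographic projection: x_i = alpha * p_x,
    a good approximation when p_z is roughly constant."""
    return alpha * p_x
```

Under perspective projection, moving the same point ten times farther away shrinks its image coordinate by a factor of ten, whereas the orthographic image does not depend on depth at all.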
21 Projections.
22 Perspective projection. All points on the ray through the camera center C give the same image P_i. With the image plane at distance f from C: x^i = -f p_x / p_z. Usually the negative sign is avoided by considering the reversed image plane, so that x^i = f p_x / p_z.
23 Perspective projection. For a point p^c = (p_x, p_y, p_z)^T: p_x^i = f p_x / p_z and p_y^i = f p_y / p_z (perspective projection). Multiplying by the arbitrary positive constant λ = p_z, in matrix form: λ p^i = P p^c, with P = [f 0 0; 0 f 0; 0 0 1] and p^i = (x^i, y^i, 1)^T.
24 Perspective projection. Homogeneous coordinates: p̃^c = (p_x, p_y, p_z, 1)^T and p̃^i = (x^i, y^i, 1)^T. Homogeneous perspective/projection matrix: Π = [f 0 0 0; 0 f 0 0; 0 0 1 0]. Then λ p̃^i = Π p̃^c = Π T_0^c p̃^0 with λ = p_z, which gives x^i = f p_x / p_z and y^i = f p_y / p_z.
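The homogeneous projection λ p̃^i = Π p̃^c can be sketched with NumPy, assuming a point already expressed in the camera frame (names are illustrative):

```python
import numpy as np

def project(p_c, f):
    """Project a 3D point p_c (camera frame, in meters) onto the
    image plane using the homogeneous perspective matrix Pi."""
    Pi = np.array([[f,   0.0, 0.0, 0.0],
                   [0.0, f,   0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
    p_tilde = np.append(p_c, 1.0)   # homogeneous 3D coordinates
    lam_p = Pi @ p_tilde            # equals (f*p_x, f*p_y, p_z)
    return lam_p[:2] / lam_p[2]     # divide by lambda = p_z -> (x_i, y_i)
```

For p^c = (2, 1, 4) and f = 0.05 this returns (0.025, 0.0125), matching x^i = f p_x / p_z and y^i = f p_y / p_z.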
25 Perspective projection. This is the ideal case. The canonical projection matrix is Π T_0^c, where T_0^c = [R_0^c t_0^c; 0^T 1] is the transformation from the world frame to the camera frame.
26 Perspective projection. Perspective projection (PP) is studied by projective geometry. PP preserves linearity: lines in 3D correspond to lines in 2D and vice versa. PP does not preserve parallelism: the intersection points in 2D of parallel lines in 3D define vanishing points (images of points infinitely far away).
27 Camera parameters. Intrinsic parameters: the parameters that link the pixel coordinates of an image point to the corresponding (metric) coordinates in the camera reference frame. Extrinsic parameters: the parameters that define the location and orientation of the camera reference frame with respect to a known world reference frame, T_c^W = [R_c^W t_c^W; 0^T 1] (6 parameters). Camera calibration: the procedure used to estimate these parameters.
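Assembling the 4x4 extrinsic transform from a rotation matrix and a translation vector can be sketched as follows (an illustrative helper, not part of the slides):

```python
import numpy as np

def extrinsic_matrix(R, t):
    """Homogeneous transform [R t; 0 1] from a 3x3 rotation R
    and a translation vector t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Applied to a homogeneous point (x, y, z, 1), this rotates by R and then translates by t; the six extrinsic parameters are the three rotation angles encoded in R plus the three components of t.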
28 Camera intrinsic parameters. The sensor R_pix has m rows and n columns of pixels; each pixel has sizes s_x, s_y (in meters), with aspect ratio s_x/s_y. Image points have coordinates (u, v) in pixel units (in the figure, u = 6, v = 8); the camera center in pixel units is o = (o_x, o_y).
29 Camera intrinsic parameters: the focal length f; the transformation between pixel coordinates and camera coordinates; the geometric distortion introduced by the optical lens system.
30 Camera intrinsic parameters. Transformation between pixel coordinates and camera coordinates (scaling + translation): u = x^i/s_x + o_x, v = y^i/s_y + o_y, or equivalently x^i = s_x (u - o_x), y^i = s_y (v - o_y); in homogeneous coordinates this is a single 3x3 matrix acting on p̃^i.
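The scaling + translation between metric image coordinates and pixel coordinates can be sketched as (function names and parameter values are illustrative):

```python
def metric_to_pixel(x_i, y_i, s_x, s_y, o_x, o_y):
    """Pixel coordinates from metric image-plane coordinates:
    u = x_i/s_x + o_x, v = y_i/s_y + o_y."""
    return x_i / s_x + o_x, y_i / s_y + o_y

def pixel_to_metric(u, v, s_x, s_y, o_x, o_y):
    """Inverse mapping: x_i = s_x*(u - o_x), y_i = s_y*(v - o_y)."""
    return s_x * (u - o_x), s_y * (v - o_y)
```

With a 10 µm pixel pitch and the camera center at (320, 240), the metric point (1 mm, -0.5 mm) lands at pixel (420, 190), and the inverse mapping recovers the metric point.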
31 Lens distortion. Types: radial distortion (pincushion distortion, barrel distortion) and non-radial (tangential) distortion. Radial distortion is modelled by a function D(r) that affects each point v in the projected plane relative to the principal point p: v_d = D(||v - p||)(v - p) + p, where D(r) is normally a non-linear scalar function and p is close to the midpoint of the projected image. Barrel distortion is characterized by a positive gradient of the distortion function, pincushion by a negative gradient.
32 Lens distortion. Radial distortion is approximated by x = x_id (1 + k_1 r^2 + k_2 r^4 + k_3 r^6), y = y_id (1 + k_1 r^2 + k_2 r^4 + k_3 r^6), where (x_id, y_id) are the coordinates of the distorted image point, r^2 = x_id^2 + y_id^2 is the square of the distance from the camera image center, and k_1, k_2, k_3 are intrinsic parameters.
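The radial distortion model above can be sketched as (illustrative; the sample coefficients are made up):

```python
def undistort_radial(x_d, y_d, k1, k2=0.0, k3=0.0):
    """Corrected coordinates from distorted ones:
    x = x_d * (1 + k1*r^2 + k2*r^4 + k3*r^6), r^2 = x_d^2 + y_d^2."""
    r2 = x_d**2 + y_d**2
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x_d * scale, y_d * scale
```

With all coefficients zero the point is unchanged; a positive k1 pushes points outward with increasing radius, consistent with the sign convention for barrel versus pincushion distortion.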
33 Other image sensor errors. Errors are also due to the imperfect orthogonality of pixel elements in CCD or CMOS sensors.
34 Optical sensors. Optical distance sensors: depth from focus, stereo vision, ToF cameras, motion and optical flow.
35 Depth from focus. The method consists of measuring the distance of objects in a scene by evaluating, from two or more images, the focal-length adjustment necessary to bring the objects into focus. Figure: short distance focused, medium distance focused, far distance focused.
36 Depth from focus.
37 Depth from focus. Near focusing; far focusing.
38 Depth from focus. For a point (x, y, z) at distance D from the lens, the in-focus condition is 1/f = 1/D + 1/e, where e is the corresponding image distance. If the image plane is instead placed at distance d + e, the point produces a blur spot of radius b(x) = (L (d + e) / 2) (1/f - 1/(d + e) - 1/D), where L is the lens aperture; s(x) denotes the blur shape on the image plane.
39 Stereo Cameras 39
40 Stereo disparity. Figure: a 3D point (x, y, z) is observed by a left and a right lens, separated by the known baseline b; both have focal length f, and the point projects to (x_l, y_l) and (x_r, y_r) on the two image planes.
41 Stereo disparity.
42 Stereo disparity. Idealized camera geometry for stereo vision: x_l/f = (x + b/2)/z and x_r/f = (x - b/2)/z. Disparity between the two images: x_l - x_r = b f / z. Depth computation: x = b (x_l + x_r) / (2 (x_l - x_r)), y = b (y_l + y_r) / (2 (y_l - y_r)), z = b f / (x_l - x_r).
43 Stereo vision. Distance is inversely proportional to disparity: closer objects can be measured more accurately. Disparity is proportional to the baseline: for a given disparity error, the accuracy of the depth estimate increases with increasing baseline b. However, as b is increased, some objects may appear in one camera but not in the other. A point visible from both cameras produces a conjugate pair. Conjugate pairs lie on epipolar lines (parallel to the x-axis for the arrangement in the figure above).
44 Stereo vision.
45 Stereo points correspondence. These two points are corresponding: how do you find them in the two images? Figure: left image, right image, and the resulting disparity.
46 Epipolar lines. Corresponding points lie on the epipolar lines. Figure: a 3D point P is imaged at q_1 and q_2 by two cameras with centers C_1 and C_2, related by (R, t); the epipolar lines l_1, l_2 pass through the epipoles e_1, e_2, two points that are known and fixed.
47 Stereo vision: depth calculation. The key problem in stereo vision is how to optimally solve the correspondence problem. Corresponding points lie on the epipolar lines. Gray-level matching: match gray-level features on corresponding epipolar lines; the zero-crossing of the Laplacian of Gaussian is a widely used approach for identifying the same feature in the left and right images. Brightness (image irradiance or intensity) I(x, y) is computed and used as shown below.
48 Depth images.
49 Time of Flight (ToF) Cameras. Range imaging systems based on the known speed of light. They measure the time of flight, i.e., the time from the emission to the return of the signal. The measurement is performed for each point of the image at once (differently from lidars). The distance resolution is about 1 cm (coarser than that of lidars). The simplest version of a time-of-flight camera uses light pulses.
50 Time of Flight (ToF) Cameras. Modulated light is emitted by a transmitter and reflected by the object to be detected. The returning light is sampled by an on-chip photosensitive ToF CCD array. The receiver compares the phase difference between the emitted and the received light and computes the "time of flight" individually for each pixel. This value, multiplied by the speed of light (about 300,000 km/s) and divided by 2, corresponds directly to the distance.
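The per-pixel distance computation d = c * Δt / 2 can be sketched as (illustrative names):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time):
    """Distance from the round-trip time of flight: d = c * t / 2.
    The division by 2 accounts for the light travelling to the
    object and back."""
    return C * round_trip_time / 2.0
```

A round-trip time of 10 ns corresponds to roughly 1.5 m, which shows why ToF electronics must resolve times on the order of tens of picoseconds to reach centimeter resolution.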
51 Time of Flight (ToF) Cameras. Each pixel consists of a photodiode that converts the incoming light into a current. In analog solutions, fast switches connected to the photodiode send the current to one of two memory elements (capacitors) that act as summation elements. In digital solutions, a time counter running at several gigahertz is connected to each pixel and stops counting when light is sensed.
52 Time of Flight (ToF) Cameras. Pros. Simplicity: in contrast to stereo vision or triangulation systems, the whole system is very compact: the illumination is placed just next to the lens, whereas the other systems need a certain minimum baseline; in contrast to laser scanning systems, no mechanical moving parts are needed. Efficient distance algorithm: it is very easy to extract the distance information from the output signals of the ToF sensor, so this task uses only a small amount of processing power, again in contrast to stereo vision, where complex correlation algorithms have to be implemented; after the distance data have been extracted, object detection, for example, is also easy to carry out, because the algorithms are not disturbed by patterns on the object. Speed: time-of-flight cameras are able to measure the distances within a complete scene with one shot; as the cameras reach up to 160 frames per second, they are ideally suited for real-time applications.
53 Time of Flight (ToF) Cameras. Cons. Background light: when using CMOS or other integrating detectors or sensors that use visible or near-visible light (400-700 nm), although most of the background light coming from artificial lighting or the sun is suppressed, the pixel still has to provide a high dynamic range, since the background light also generates electrons that have to be stored. For example, the illumination units in many of today's ToF cameras provide an illumination level of about 1 watt, while the sun delivers about 50 watts per square meter after the optical band-pass filter; therefore, if the illuminated scene has a size of 1 square meter, the light from the sun is 50 times stronger than the modulated signal. For non-integrating ToF sensors that do not integrate light over time and use near-infrared detectors (InGaAs) to capture the short laser pulse, direct viewing of the sun is a non-issue; such ToF sensors are used in space applications and are in consideration for automotive applications.
54 Time of Flight (ToF) Cameras. Cons. Interference: in certain types of ToF devices, if several time-of-flight cameras are running at the same time, they may disturb each other's measurements. Multiple reflections: in contrast to laser scanning systems, where only a single point is illuminated at once, time-of-flight cameras illuminate a whole scene; on a phase-difference device, due to multiple reflections, the light may reach the objects along several paths, and therefore the measured distance may be greater than the true distance. Direct ToF imagers are vulnerable when the light reflects from a specular surface. There are published papers that outline the strengths and weaknesses of the various ToF devices and approaches.
55 Optical flow. Optical flow is the pattern of apparent motion of objects, surfaces, and edges in successive scenes caused by the relative motion between the camera and the scene. Optical flow techniques are used for motion detection, object segmentation, time-to-collision estimation, motion-compensated encoding, and stereo disparity measurement.
56 Optical flow.
57 Optical flow. Optical flow methods try to calculate the motion between two image frames taken at times t and t + δt at every voxel position. These methods are called differential since they are based on local Taylor-series approximations of the image signal, i.e., they use partial derivatives with respect to the spatial and temporal coordinates. Assuming the movement to be small: I(x, y, t) = I(x + δx, y + δy, t + δt) = I(x, y, t) + (∂I/∂x) δx + (∂I/∂y) δy + (∂I/∂t) δt + ..., from which (∂I/∂x) δx + (∂I/∂y) δy + (∂I/∂t) δt = 0. A voxel (volume + pixel) is a volume element representing a value on a regular grid in 3D space.
58 Optical flow. Dividing by δt gives I_x V_x + I_y V_y + I_t = 0, i.e., ∇I^T V = -I_t. This is only one equation in the two unknowns V_x, V_y and therefore cannot be solved: this is known as the aperture problem of optical flow algorithms. To find the optical flow, another set of equations is needed, given by some additional constraint; all optical flow methods introduce additional conditions for estimating the actual flow.
59 Optical flow. Lucas-Kanade optical flow method: a two-frame differential method for motion estimation. The additional constraints needed for the estimation of the flow are introduced by assuming that the flow (V_x, V_y) is constant in a small window of size m x m, with m > 1, centered at pixel (x, y). Numbering the pixels within the window as 1, ..., n = m^2, a set of equations is obtained: I_x1 V_x + I_y1 V_y = -I_t1, I_x2 V_x + I_y2 V_y = -I_t2, ..., I_xn V_x + I_yn V_y = -I_tn. In matrix form A x = b, with A = [I_x1 I_y1; ...; I_xn I_yn], x = (V_x, V_y)^T and b = -(I_t1, ..., I_tn)^T, solved in the least-squares sense as x = (A^T A)^{-1} A^T b.
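The least-squares solution above can be sketched with NumPy (illustrative; the window gradients used in the example are synthetic, not computed from real frames):

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Least-squares flow for one window: solve A x = b with
    A = [Ix Iy] (one row per pixel) and b = -It, i.e.
    x = (A^T A)^{-1} A^T b, computed here via np.linalg.lstsq."""
    A = np.stack([np.ravel(Ix), np.ravel(Iy)], axis=1)
    b = -np.ravel(It)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (Vx, Vy)
```

If the temporal derivatives are generated from a known flow (V_x, V_y) via I_t = -(I_x V_x + I_y V_y) and the gradients span both directions, the solver recovers that flow exactly, since the system is consistent and A has full column rank.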
More informationAugmented Reality II - Camera Calibration - Gudrun Klinker May 11, 2004
Augmented Reality II - Camera Calibration - Gudrun Klinker May, 24 Literature Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2. (Section 5,
More information3D Geometry and Camera Calibration
3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationComputer Vision Project-1
University of Utah, School Of Computing Computer Vision Project- Singla, Sumedha sumedha.singla@utah.edu (00877456 February, 205 Theoretical Problems. Pinhole Camera (a A straight line in the world space
More informationImage formation. Thanks to Peter Corke and Chuck Dyer for the use of some slides
Image formation Thanks to Peter Corke and Chuck Dyer for the use of some slides Image Formation Vision infers world properties form images. How do images depend on these properties? Two key elements Geometry
More informationComputer Vision Projective Geometry and Calibration. Pinhole cameras
Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole
More informationAll human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them.
All human beings desire to know. [...] sight, more than any other senses, gives us knowledge of things and clarifies many differences among them. - Aristotle University of Texas at Arlington Introduction
More informationChapter 2 - Fundamentals. Comunicação Visual Interactiva
Chapter - Fundamentals Comunicação Visual Interactiva Structure of the human eye (1) CVI Structure of the human eye () Celular structure of the retina. On the right we can see one cone between two groups
More informationAn introduction to 3D image reconstruction and understanding concepts and ideas
Introduction to 3D image reconstruction An introduction to 3D image reconstruction and understanding concepts and ideas Samuele Carli Martin Hellmich 5 febbraio 2013 1 icsc2013 Carli S. Hellmich M. (CERN)
More informationGeometry of Multiple views
1 Geometry of Multiple views CS 554 Computer Vision Pinar Duygulu Bilkent University 2 Multiple views Despite the wealth of information contained in a a photograph, the depth of a scene point along the
More informationMidterm Exam Solutions
Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow
More informationProjective Geometry and Camera Models
/2/ Projective Geometry and Camera Models Computer Vision CS 543 / ECE 549 University of Illinois Derek Hoiem Note about HW Out before next Tues Prob: covered today, Tues Prob2: covered next Thurs Prob3:
More informationMultiple View Geometry
Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric
More informationReflection & Mirrors
Reflection & Mirrors Geometric Optics Using a Ray Approximation Light travels in a straight-line path in a homogeneous medium until it encounters a boundary between two different media A ray of light is
More informationProjective Geometry and Camera Models
Projective Geometry and Camera Models Computer Vision CS 43 Brown James Hays Slides from Derek Hoiem, Alexei Efros, Steve Seitz, and David Forsyth Administrative Stuff My Office hours, CIT 375 Monday and
More informationUnit 3 Multiple View Geometry
Unit 3 Multiple View Geometry Relations between images of a scene Recovering the cameras Recovering the scene structure http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook1.html 3D structure from images Recover
More informationComputer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More information521466S Machine Vision Exercise #1 Camera models
52466S Machine Vision Exercise # Camera models. Pinhole camera. The perspective projection equations or a pinhole camera are x n = x c, = y c, where x n = [x n, ] are the normalized image coordinates,
More informationSTEREO VISION AND LASER STRIPERS FOR THREE-DIMENSIONAL SURFACE MEASUREMENTS
XVI CONGRESO INTERNACIONAL DE INGENIERÍA GRÁFICA STEREO VISION AND LASER STRIPERS FOR THREE-DIMENSIONAL SURFACE MEASUREMENTS BARONE, Sandro; BRUNO, Andrea University of Pisa Dipartimento di Ingegneria
More informationVisual Pathways to the Brain
Visual Pathways to the Brain 1 Left half of visual field which is imaged on the right half of each retina is transmitted to right half of brain. Vice versa for right half of visual field. From each eye
More informationComputer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More informationLight: Geometric Optics (Chapter 23)
Light: Geometric Optics (Chapter 23) Units of Chapter 23 The Ray Model of Light Reflection; Image Formed by a Plane Mirror Formation of Images by Spherical Index of Refraction Refraction: Snell s Law 1
More informationEpipolar Geometry Prof. D. Stricker. With slides from A. Zisserman, S. Lazebnik, Seitz
Epipolar Geometry Prof. D. Stricker With slides from A. Zisserman, S. Lazebnik, Seitz 1 Outline 1. Short introduction: points and lines 2. Two views geometry: Epipolar geometry Relation point/line in two
More informationEpipolar Geometry and Stereo Vision
Epipolar Geometry and Stereo Vision Computer Vision Shiv Ram Dubey, IIIT Sri City Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X
More informationComputer Vision I - Algorithms and Applications: Multi-View 3D reconstruction
Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View
More informationProjective geometry for Computer Vision
Department of Computer Science and Engineering IIT Delhi NIT, Rourkela March 27, 2010 Overview Pin-hole camera Why projective geometry? Reconstruction Computer vision geometry: main problems Correspondence
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribe: Sameer Agarwal LECTURE 1 Image Formation 1.1. The geometry of image formation We begin by considering the process of image formation when a
More information3D Computer Vision. Depth Cameras. Prof. Didier Stricker. Oliver Wasenmüller
3D Computer Vision Depth Cameras Prof. Didier Stricker Oliver Wasenmüller Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t R 2 3,t 3 Camera 1 Camera
More informationSensor technology for mobile robots
Laser application, vision application, sonar application and sensor fusion (6wasserf@informatik.uni-hamburg.de) Outline Introduction Mobile robots perception Definitions Sensor classification Sensor Performance
More informationHow to achieve this goal? (1) Cameras
How to achieve this goal? (1) Cameras History, progression and comparisons of different Cameras and optics. Geometry, Linear Algebra Images Image from Chris Jaynes, U. Kentucky Discrete vs. Continuous
More informationTechnical Basis for optical experimentation Part #4
AerE 545 class notes #11 Technical Basis for optical experimentation Part #4 Hui Hu Department of Aerospace Engineering, Iowa State University Ames, Iowa 50011, U.S.A Light sensing and recording Lenses
More informationImage Transformations & Camera Calibration. Mašinska vizija, 2018.
Image Transformations & Camera Calibration Mašinska vizija, 2018. Image transformations What ve we learnt so far? Example 1 resize and rotate Open warp_affine_template.cpp Perform simple resize
More informationFinal Exam Study Guide
Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.
More informationChapters 1 5. Photogrammetry: Definition, introduction, and applications. Electro-magnetic radiation Optics Film development and digital cameras
Chapters 1 5 Chapter 1: Photogrammetry: Definition, introduction, and applications Chapters 2 4: Electro-magnetic radiation Optics Film development and digital cameras Chapter 5: Vertical imagery: Definitions,
More informationPHYSICS. Chapter 34 Lecture FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E RANDALL D. KNIGHT
PHYSICS FOR SCIENTISTS AND ENGINEERS A STRATEGIC APPROACH 4/E Chapter 34 Lecture RANDALL D. KNIGHT Chapter 34 Ray Optics IN THIS CHAPTER, you will learn about and apply the ray model of light Slide 34-2
More informationRectification and Disparity
Rectification and Disparity Nassir Navab Slides prepared by Christian Unger What is Stereo Vision? Introduction A technique aimed at inferring dense depth measurements efficiently using two cameras. Wide
More information1 Projective Geometry
CIS8, Machine Perception Review Problem - SPRING 26 Instructions. All coordinate systems are right handed. Projective Geometry Figure : Facade rectification. I took an image of a rectangular object, and
More informationSingle View Geometry. Camera model & Orientation + Position estimation. What am I?
Single View Geometry Camera model & Orientation + Position estimation What am I? Vanishing point Mapping from 3D to 2D Point & Line Goal: Point Homogeneous coordinates represent coordinates in 2 dimensions
More informationPerception II: Pinhole camera and Stereo Vision
Perception II: Pinhole camera and Stereo Vision Davide Scaramuzza Margarita Chli, Paul Furgale, Marco Hutter, Roland Siegwart 1 Mobile Robot Control Scheme knowledge, data base mission commands Localization
More informationEpipolar Geometry and Stereo Vision
Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x
More information