Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR


1 Mobile & Service Robotics Sensors for Robotics 3
2 Laser sensors
Rays are transmitted and received coaxially. The target is illuminated by collimated rays. The receiver measures the time of flight (back and forth). It is possible to change the ray direction (2D or 3D measurements).
[Figure: transmitter and receiver, path length L to the target region, target at distance D]
λ = c/f;  D' = L + 2D = L + (θ/2π)·λ
3 Laser sensors
[Figure: transmitted and reflected sinusoids of wavelength λ; amplitude vs. phase, with the reflected wave shifted by the phase θ with respect to the transmitted one]
4 Laser sensors: METHODS
Pulsed laser: direct measurement of the time of flight; one must be able to measure intervals in the picosecond range
Beat frequency between a modulating wave and the reflected wave
Phase delay: it is the most easily implemented method
5 Laser sensors
λ = c/f;  D' = L + 2D = L + (θ/2π)·λ
c = speed of light; f = frequency of the modulating wave; D' = total distance travelled
Example: f = 5 MHz gives λ ≈ 60 m
The confidence on the distance estimate is inversely proportional to the square of the received signal amplitude
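The phase-delay relations above can be coded in a few lines (a minimal sketch; the function names are mine, and only the 5 MHz example comes from the slide):

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def wavelength(f_mod):
    # lambda = c / f: wavelength of the modulating wave
    return C / f_mod

def range_from_phase(theta, f_mod):
    # The round trip adds 2*D to the path; the measured phase gives
    # 2*D = (theta / (2*pi)) * lambda, hence D = theta * lambda / (4*pi).
    return theta * wavelength(f_mod) / (4.0 * math.pi)

lam = wavelength(5e6)               # ~60 m, the example on the slide
d = range_from_phase(math.pi, 5e6)  # half a wavelength of phase shift -> ~15 m
```

Note that the phase measurement is ambiguous beyond one wavelength: a 60 m modulation wavelength gives an unambiguous range of 30 m.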
6 Laser sensors
A typical image from a rotating-mirror laser scanner. Segment lengths are proportional to the measurement uncertainty.
7 Triangulation
Triangulation is the process of determining the location of an object by measuring angles to it from known points at either end of a fixed, known baseline. The object can be seen as the third vertex of a triangle with one known side and two known angles.
In practice:
Light sheets (or other patterns) are projected on the target
Reflected light is captured by a linear or 2D matrix light sensor
Simple trigonometric relations are used to compute the distance
8 Triangulation
Triangulation concepts: with baseline l and angles α, β measured at its two ends, the distance d of the target from the baseline is
d = l·tanα·tanβ / (tanα + tanβ)
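A hedged sketch of this triangulation relation (the helper name and the test values are illustrative, not from the slide):

```python
import math

def depth_from_angles(l, alpha, beta):
    # Perpendicular distance of the target from the baseline:
    # d = l * tan(alpha) * tan(beta) / (tan(alpha) + tan(beta))
    ta, tb = math.tan(alpha), math.tan(beta)
    return l * ta * tb / (ta + tb)

# Symmetric 45-degree sighting: the target sits l/2 in front of the baseline.
d = depth_from_angles(2.0, math.radians(45), math.radians(45))  # ~1.0
```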
9 Triangulation
By the law of sines:
sinα/BC = sinβ/AC = sinγ/AB
AC = AB·sinβ/sinγ;  BC = AB·sinα/sinγ
RC = AC·sinα = BC·sinβ
RC = AB·sinα·sinβ/sinγ = AB·sinα·sinβ/sin(α + β)
10 Triangulation
[Figure: transmitter, receiver with focal length f, baseline L, image offset x, target at distance D]
D = L·f / x
11 Structured light
12 Structured light
H = D·tanα
13 Structured light
Monodimensional case: with baseline D, focal length f, projection angle α, and image coordinate u,
x = D·u / (f·cotα − u)
z = D·f / (f·cotα − u)
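Assuming the reconstruction above, the monodimensional structured-light equations can be sketched as follows (symbol names follow the slide; the numeric values are illustrative):

```python
import math

def structured_light_xz(D, f, alpha, u):
    # Monodimensional structured-light case:
    #   x = D*u / (f*cot(alpha) - u)
    #   z = D*f / (f*cot(alpha) - u)
    # D: baseline, f: focal length, alpha: projection angle,
    # u: image coordinate of the illuminated point.
    cot_a = 1.0 / math.tan(alpha)
    denom = f * cot_a - u
    return D * u / denom, D * f / denom

x, z = structured_light_xz(D=0.5, f=0.01, alpha=math.radians(45), u=0.005)
```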
14 Vision
Vision is the most important sense in humans. Vision includes three steps:
Data recording and transformation in the retina
Data transmission through the optic nerves
Data processing in the brain
15 Natural vision
[Figure: the retina]
16 Natural vision
fMRI shows the brain areas involved in the neural activity associated with vision
Optic chiasm
17 Artificial vision
Camera = retina
Frame grabber = nerves
CPU = brain
18 Vision sensors: hardware
CCD (Charge-Coupled Device): light-sensitive discharging capacitors, 5 to 25 μm
CMOS (Complementary Metal Oxide Semiconductor technology)
19 Artificial vision
Projection from the 3D world onto a 2D plane: perspective projection (transformation matrix)
Discretization effects due to transducer pixels (CCD or CMOS)
Misalignment errors
[Figure: parallel lines imaged as converging lines; pixel discretization]
20 Artificial vision
[Figure: perspective projection of a 3D object; optical axis, focal plane π_F, principal image plane π, reversed image plane]
21 Artificial vision
Geometric parameters
[Figure: camera frame R_c with center C_c and optical axis, focal length f, focal plane π_F, image frame R_i with origin O_i on the image plane π_i, scene point P and its projections m and m_i]
22 Artificial vision
Several rigid and perspective transformations are involved
[Figure: transformation chain from the world frame R to the camera frame R_c (rigid, T_A), to the projection plane π (perspective), to the image frame R_i (rescaling, T_B, and optical correction)]
23 Artificial vision
Perspective projection: a camera-frame point P = (x_c, y_c, z_c) maps to the image-plane point
x_i = f·x_c / z_c,   y_i = f·y_c / z_c
[Figure: camera center C_c, image plane π at focal distance f, point P and its projection]
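The projection equation translates directly into code (a minimal illustration; the function name and numeric values are mine):

```python
def project(point_c, f):
    # Perspective projection of a camera-frame point:
    #   x_i = f * x_c / z_c,  y_i = f * y_c / z_c
    x_c, y_c, z_c = point_c
    return f * x_c / z_c, f * y_c / z_c

# A point 4 m away, imaged with an 8 mm focal length:
xi, yi = project((2.0, 1.0, 4.0), f=0.008)
```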
24 Artificial vision
[Figure: image-plane axes x and y of the frame R_i]
25 Artificial vision
Image parameters
[Figure: image frame with origin O_i, pixel dimensions p_x and p_y, principal point C_i, translation t_c]
26 Artificial vision
Aberration types: pincushion distortion, barrel distortion; radial distortion; non-radial (tangential) distortion
Radial distortion is modelled by a function D(r) that affects each point v of the projected plane relative to the principal point p, where D(r) is normally a nonlinear scalar function and p is close to the midpoint of the projected image. Barrel projections are characterized by a positive gradient of the distortion function, pincushion projections by a negative one.
v_d = D(‖v − p‖)·(v − p) + p
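A hedged sketch of this radial distortion model, assuming a simple polynomial D(r) = 1 + k1·r² (this parametrization and the names are illustrative, not from the slide):

```python
import math

def distort(v, p, k1):
    # v_d = D(r) * (v - p) + p, with r = |v - p| and D(r) = 1 + k1*r**2.
    # The sign of k1 selects the sign of the gradient of D(r), i.e.
    # barrel vs. pincushion distortion.
    dx, dy = v[0] - p[0], v[1] - p[1]
    r = math.hypot(dx, dy)
    s = 1.0 + k1 * r * r
    return p[0] + s * dx, p[1] + s * dy

# k1 = 0 gives the identity mapping
assert distort((3.0, 4.0), (0.0, 0.0), 0.0) == (3.0, 4.0)
```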
27 Artificial vision
Image errors: errors are due to the imperfect alignment of pixel elements
28 Vision sensors
Distance sensors:
Depth from focus
Stereo vision
Motion and optical flow
29 Depth from focus
The method consists in measuring the distance of an object by evaluating the focal-length adjustment necessary to bring it into focus
[Figure: short-, medium-, and far-distance focus]
30 Depth from focus
Thin-lens equation relating focal length f, object distance D, and image distance e:
1/f = 1/D + 1/e
An object away from the focused distance is imaged as a blur circle: the blur radius b(x) grows with the focusing error δe and with the lens aperture L, and its shape is described by s(x)
[Figure: scene point (x, y, z), image plane (x_i, y_i), focal plane]
31 Depth from focus
[Figure: near focusing vs. far focusing]
32 Stereo disparity
[Figure: scene point (x, y, z) imaged by the left and right lenses with focal length f and known baseline b; image-plane coordinates (x_l, y_l) and (x_r, y_r)]
33 Stereo disparity
Idealized camera geometry for stereo vision:
(x + b/2)/z = x_l/f,   (x − b/2)/z = x_r/f
x = b·(x_l + x_r) / (2·(x_l − x_r))
y = b·(y_l + y_r) / (2·(y_l − y_r))
z = b·f / (x_l − x_r)
The quantity (x_l − x_r) is the disparity between the two images; the last equation is the depth computation.
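These relations translate directly into code (a minimal sketch of the idealized geometry; the function name and numeric values are illustrative):

```python
def stereo_xyz(xl, yl, xr, yr, b, f):
    # Idealized stereo reconstruction:
    #   disparity d = xl - xr
    #   x = b*(xl + xr) / (2*d),  y = b*(yl + yr) / (2*d),  z = b*f / d
    d = xl - xr
    x = b * (xl + xr) / (2.0 * d)
    y = b * (yl + yr) / (2.0 * d)
    z = b * f / d
    return x, y, z

# Point on the optical axis midline: xl = -xr, so x = 0.
x, y, z = stereo_xyz(0.002, 0.001, -0.002, 0.001, b=0.1, f=0.008)
```

Note how z varies as 1/disparity: the same disparity error costs more depth accuracy for distant (small-disparity) points, which is the accuracy argument made on the next slide.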
34 Stereo vision
Distance is inversely proportional to disparity: closer objects can be measured more accurately
Disparity is proportional to the baseline: for a given disparity error, the accuracy of the depth estimate increases with increasing baseline b
However, as b is increased, some objects may appear in one camera but not in the other
A point visible from both cameras produces a conjugate pair
Conjugate pairs lie on epipolar lines (parallel to the x axis for the arrangement in the figure above)
35 Stereo point correspondence
These two points correspond: how do you find them in the two images?
[Figure: left and right images; the disparity between the left and right projections]
36 Epipolar lines
Corresponding points of P lie on the epipolar lines
[Figure: camera centers C_1 and C_2 related by the rigid transformation (R, t); image planes τ_1 and τ_2 with the projections q_1 and q_2 of P; epipolar lines through the epipoles e_1 and e_2, two points that are known and fixed]
37 Stereo vision
Depth calculation: the key problem in stereo vision is how to optimally solve the correspondence problem
Corresponding points lie on the epipolar lines
Gray-level matching: match gray-level features on corresponding epipolar lines
Zero crossing of the Laplacian of Gaussian is a widely used approach for identifying the same feature in the left and right images
Brightness = image irradiance or intensity I(x, y) is computed and used as shown below
38 Laplacian
The Laplacian is a 2D isotropic measure of the 2nd spatial derivative of an image
The Laplacian of an image highlights regions of rapid intensity change and is often used for edge detection
The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian smoothing filter, in order to reduce its sensitivity to noise
The operator normally takes a single gray-level image as input and produces another gray-level image as output
39 Laplacian
The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by:
L(x, y) = ∂²I/∂x² + ∂²I/∂y²
In practice it is computed as a convolution, L = P ∗ I (∗ = convolution operator), where P is a small kernel approximating the second derivatives; two commonly used 3×3 kernels are
P1 = [0 1 0; 1 −4 1; 0 1 0],   P2 = [1 1 1; 1 −8 1; 1 1 1]
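A minimal sketch of applying the standard 3×3 Laplacian kernel P1 by direct convolution (pure Python for illustration; real code would use an image-processing library):

```python
# 3x3 Laplacian kernel approximating d2I/dx2 + d2I/dy2
P1 = [[0, 1, 0],
      [1, -4, 1],
      [0, 1, 0]]

def laplacian(img):
    # L = P1 * I over the valid region: output is (M-2) x (N-2).
    M, N = len(img), len(img[0])
    out = [[0] * (N - 2) for _ in range(M - 2)]
    for i in range(M - 2):
        for j in range(N - 2):
            out[i][j] = sum(img[i + k][j + l] * P1[k][l]
                            for k in range(3) for l in range(3))
    return out

flat = [[5] * 4 for _ in range(4)]
assert laplacian(flat) == [[0, 0], [0, 0]]  # constant image: zero response
```

The kernel entries sum to zero, so flat regions give no response and only intensity changes (edges) survive.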
40 Convolution
Convolution is a simple mathematical operation which is fundamental to many image-processing operators
Convolution multiplies together two arrays of numbers, generally of different sizes but of the same dimensionality, to produce a third array of numbers of the same dimensionality
This can be used in image processing to implement operators whose output pixel values are simple linear combinations of certain input pixel values
In image processing, one of the input arrays is normally just the gray-level image. The second array is usually much smaller, also two-dimensional (although it may be just a single pixel thick), and is known as the kernel
41 Convolution
42 Convolution matrix
I(i, j): IMAGE;  K(i, j): KERNEL
If the image has M rows and N columns, and the kernel has m rows and n columns, then the output image will have M − m + 1 rows and N − n + 1 columns
Example: (6 − 2 + 1) × (9 − 3 + 1) = 5 × 7
43 Convolution product
O(i, j) = Σ_{k=1..m} Σ_{l=1..n} I(i + k − 1, j + l − 1)·K(k, l)
i = 1, …, (M − m + 1);  j = 1, …, (N − n + 1)
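The convolution product above can be implemented directly (a minimal sketch using 0-based indices instead of the slide's 1-based ones; names and test arrays are mine):

```python
def convolve(I, K):
    # O(i, j) = sum_{k, l} I(i + k, j + l) * K(k, l)  (0-based form of the
    # slide's formula); output size is (M - m + 1) x (N - n + 1).
    M, N = len(I), len(I[0])
    m, n = len(K), len(K[0])
    return [[sum(I[i + k][j + l] * K[k][l]
                 for k in range(m) for l in range(n))
             for j in range(N - n + 1)]
            for i in range(M - m + 1)]

I = [[1, 2, 3],
     [4, 5, 6]]
K = [[1, 0],
     [0, 1]]
O = convolve(I, K)  # 1 x 2 output, matching (M - m + 1) x (N - n + 1)
```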
44 Stereo vision
Zero crossing of the Laplacian of Gaussian: identification of features that are stable and match well
Laplacian of the intensity image
Step/edge detection in a noisy image: filter through Gaussian smoothing
45 Edge detection
46 Stereo vision
[Figure: left and right vertically filtered images; resulting confidence image and depth image]
47 Optical flow
Optical flow (or optic flow) is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene
Optical flow techniques such as motion detection, object segmentation, time-to-collision and focus-of-expansion calculations, motion-compensated encoding, and stereo disparity measurement utilize this motion of the objects' surfaces and edges
48 Optical flow
Optical flow methods try to calculate the motion between two image frames, taken at times t and t + δt, at every voxel position. These methods are called differential since they are based on local Taylor-series approximations of the image signal, i.e., they use partial derivatives with respect to the spatial and temporal coordinates.
I(x, y, t) = I(x + δx, y + δy, t + δt)
= I(x, y, t) + (∂I/∂x)·δx + (∂I/∂y)·δy + (∂I/∂t)·δt + …
⇒ (∂I/∂x)·δx + (∂I/∂y)·δy + (∂I/∂t)·δt = 0
A voxel (volumetric pixel) is a volume element representing a value on a regular grid in 3D space
49 Optical flow
Dividing by δt gives the optical-flow constraint:
(∂I/∂x)·V_x + (∂I/∂y)·V_y + ∂I/∂t = 0,   i.e.   ∇Iᵀ·V = −I_t
This is known as the aperture problem of optical-flow algorithms: there is only one equation in two unknowns, so the system cannot be solved
To find the optical flow another set of equations is needed, given by some additional constraint. All optical-flow methods introduce additional conditions for estimating the actual flow.
50 Optical flow
Lucas-Kanade optical flow method: a two-frame differential method for motion estimation
The additional constraint is introduced by assuming that the flow (V_x, V_y) is constant in a small window of size m × m, with m > 1, centered at pixel (x, y)
Numbering the pixels 1, …, n = m², a set of equations is obtained:
I_{x1}·V_x + I_{y1}·V_y = −I_{t1}
I_{x2}·V_x + I_{y2}·V_y = −I_{t2}
…
I_{xn}·V_x + I_{yn}·V_y = −I_{tn}
In matrix form A·x = b, solved in the least-squares sense: x = (AᵀA)⁻¹·Aᵀ·b
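A minimal sketch of the Lucas-Kanade least-squares solution for one window: since A has only two columns, the 2×2 normal equations (AᵀA)·V = Aᵀb can be solved in closed form (names and gradient values below are illustrative):

```python
def lucas_kanade_flow(Ix, Iy, It):
    # Rows of A are (Ix_i, Iy_i); b_i = -It_i.
    # Solve (A^T A) V = A^T b for V = (Vx, Vy) via Cramer's rule.
    sxx = sum(ix * ix for ix in Ix)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    syy = sum(iy * iy for iy in Iy)
    bx = -sum(ix * it for ix, it in zip(Ix, It))
    by = -sum(iy * it for iy, it in zip(Iy, It))
    det = sxx * syy - sxy * sxy  # near-zero det: aperture problem persists
    vx = (syy * bx - sxy * by) / det
    vy = (sxx * by - sxy * bx) / det
    return vx, vy

# Gradients consistent with a uniform flow V = (1, 2),
# i.e. It_i = -(Ix_i*Vx + Iy_i*Vy) at every pixel of the window:
Ix = [1.0, 0.0, 2.0]
Iy = [0.0, 1.0, 1.0]
It = [-1.0, -2.0, -4.0]
vx, vy = lucas_kanade_flow(Ix, Iy, It)
```

When all gradients in the window point the same way, det ≈ 0 and the flow component along the edge stays unobservable, which is exactly the aperture problem of the previous slide.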
Basilio Bona DAUIN Politecnico di Torino
More informationRuch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska. Krzysztof Krawiec IDSS
Ruch (Motion) Rozpoznawanie Obrazów Krzysztof Krawiec Instytut Informatyki, Politechnika Poznańska 1 Krzysztof Krawiec IDSS 2 The importance of visual motion Adds entirely new (temporal) dimension to visual
More informationChapter 32 Light: Reflection and Refraction. Copyright 2009 Pearson Education, Inc.
Chapter 32 Light: Reflection and Refraction Units of Chapter 32 The Ray Model of Light Reflection; Image Formation by a Plane Mirror Formation of Images by Spherical Mirrors Index of Refraction Refraction:
More informationChapter 26 Geometrical Optics
Chapter 26 Geometrical Optics 1 Overview of Chapter 26 The Reflection of Light Forming Images with a Plane Mirror Spherical Mirrors Ray Tracing and the Mirror Equation The Refraction of Light Ray Tracing
More informationNoise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions
Others  Noise Removal Techniques  Edge Detection Techniques  Geometric Operations  Color Image Processing  Color Spaces Xiaojun Qi Noise Model The principal sources of noise in digital images
More informationImage Warping. Srikumar Ramalingam School of Computing University of Utah. [Slides borrowed from Ross Whitaker] 1
Image Warping Srikumar Ramalingam School of Computing University of Utah [Slides borrowed from Ross Whitaker] 1 Geom Trans: Distortion From Optics Barrel Distortion Pincushion Distortion Straight lines
More informationIntroduction to Homogeneous coordinates
Last class we considered smooth translations and rotations of the camera coordinate system and the resulting motions of points in the image projection plane. These two transformations were expressed mathematically
More informationBasic distinctions. Definitions. Epstein (1965) familiar size experiment. Distance, depth, and 3D shape cues. Distance, depth, and 3D shape cues
Distance, depth, and 3D shape cues Pictorial depth cues: familiar size, relative size, brightness, occlusion, shading and shadows, aerial/ atmospheric perspective, linear perspective, height within image,
More informationComputer and Machine Vision
Computer and Machine Vision Lecture Week 11 Part2 Segmentation, Camera Calibration and Feature Alignment March 28, 2014 Sam Siewert Outline of Week 11 Exam #1 Results Overview and Solutions Wrap up of
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationCamera Model and Calibration. Lecture12
Camera Model and Calibration Lecture12 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationEpipolar Geometry in Stereo, Motion and Object Recognition
Epipolar Geometry in Stereo, Motion and Object Recognition A Unified Approach by GangXu Department of Computer Science, Ritsumeikan University, Kusatsu, Japan and Zhengyou Zhang INRIA SophiaAntipolis,
More informationPanoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints
Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints Wei Jiang Japan Science and Technology Agency 418, Honcho, Kawaguchishi, Saitama, Japan jiang@anken.go.jp
More informationHandEye Calibration from Image Derivatives
HandEye Calibration from Image Derivatives Henrik Malm, Anders Heyden Centre for Mathematical Sciences, Lund University Box 118, SE221 00 Lund, Sweden email: henrik,heyden@maths.lth.se Abstract. In this
More informationAgenda. Rotations. Camera calibration. Homography. Ransac
Agenda Rotations Camera calibration Homography Ransac Geometric Transformations y x Transformation Matrix # DoF Preserves Icon translation rigid (Euclidean) similarity affine projective h I t h R t h sr
More informationChapter 3: Intensity Transformations and Spatial Filtering
Chapter 3: Intensity Transformations and Spatial Filtering 3.1 Background 3.2 Some basic intensity transformation functions 3.3 Histogram processing 3.4 Fundamentals of spatial filtering 3.5 Smoothing
More information