Mobile & Service Robotics: Sensors for Robotics 3 (Basilio Bona, ROBOTICA 03CFIOR, Politecnico di Torino)

Laser sensors
- Rays are transmitted and received coaxially
- The target is illuminated by collimated rays
- The receiver measures the time of flight (back and forth)
- It is possible to change the ray direction (2D or 3D measurements)
[Figure: transmitter and receiver, with internal path L and target distance D]

λ = c/f;  L + 2D = L + (θ/2π) λ

Laser sensors
[Figure: amplitude of the transmitted and reflected waves over one wavelength λ; the reflected wave is shifted in phase by θ]

Laser sensors: methods
- Pulsed laser: direct measurement of the time of flight; one must be able to measure intervals in the picosecond range
- Beat frequency between a modulating wave and the reflected wave
- Phase delay: the easiest method to implement

Laser sensors

λ = c/f;  D_tot = L + 2D = L + (θ/2π) λ

where c = speed of light, f = frequency of the modulating wave, D_tot = total distance travelled.

Example: f = 5 MHz gives λ = 60 m.

The confidence in the distance estimate is inversely proportional to the square of the received signal amplitude.
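A minimal sketch of the phase-delay computation above (Python; the function and argument names are illustrative, not from the slides):

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def phase_shift_range(theta, f_mod):
    """Distance from the phase shift theta [rad] between the transmitted
    and reflected modulated beams, for modulation frequency f_mod [Hz].
    The round-trip excess path is (theta / 2pi) * lambda, so the one-way
    distance D is half of that."""
    lam = C / f_mod                       # wavelength of the modulating wave
    return (theta / (2.0 * math.pi)) * lam / 2.0

# Slide example: f = 5 MHz gives lambda of about 60 m, hence an
# unambiguous one-way range of lambda / 2 = 30 m.
print(phase_shift_range(math.pi / 2, 5e6))   # ~7.5 m
```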

Laser sensors
[Figure: a typical scan from a rotating-mirror laser scanner; segment lengths are proportional to the measurement uncertainty]

Triangulation
Triangulation is the process of determining the location of an object by measuring the angles to it from the two ends of a fixed, known baseline. The object can then be taken as the third vertex of a triangle with one known side and two known angles.
In practice:
- Light sheets (or other patterns) are projected on the target
- Reflected light is captured by a linear or 2D matrix light sensor
- Simple trigonometric relations are used to compute the distance

Triangulation: concepts
[Figure: target at perpendicular distance d from a baseline of length l, observed at angles α and β from its two ends]

d = l / (1/tan α + 1/tan β) = l tan α tan β / (tan α + tan β)

Triangulation

sin α / BC = sin β / AC = sin γ / AB

AC = AB sin β / sin γ;  BC = AB sin α / sin γ

RC = AC sin α;  RC = BC sin β

RC = AB sin α sin β / sin γ = AB sin α sin β / sin(α + β)
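A small sketch of this rule, assuming the angles are measured at the two ends A and B of the baseline:

```python
import math

def triangulate_distance(ab, alpha, beta):
    """Distance RC from the baseline AB to the target C, given the baseline
    length ab and the angles alpha (at A) and beta (at B), in radians.
    Uses RC = AB * sin(alpha) * sin(beta) / sin(alpha + beta), which follows
    from the law of sines with gamma = pi - alpha - beta."""
    return ab * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Example: a 1 m baseline with both angles at 45 degrees puts the target
# 0.5 m away from the baseline.
print(triangulate_distance(1.0, math.radians(45), math.radians(45)))  # 0.5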

Triangulation
[Figure: laser transmitter and lens of focal length f separated by baseline L; the spot images at offset x]

D = L f / x

Structured light

Structured light

H = D tan α

Structured light: one-dimensional case
[Figure: projector at angle α and camera with focal length f separated by baseline D; u is the image coordinate of the projected spot]

x = D u / (f cot α − u);  z = D f / (f cot α − u)
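A sketch of the one-dimensional equations above, with the slide's symbols (function name illustrative):

```python
import math

def structured_light_1d(D, f, alpha, u):
    """Target coordinates (x, z) in the one-dimensional structured-light
    geometry: baseline D, camera focal length f, projection angle alpha
    [rad], and measured image coordinate u (same units as f)."""
    denom = f / math.tan(alpha) - u      # f * cot(alpha) - u
    x = D * u / denom
    z = D * f / denom
    return x, z
```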

Vision
Vision is the most important sense in humans. It includes three steps:
- Data recording and transformation in the retina
- Data transmission through the optic nerves
- Data processing by the brain

Natural vision
[Figure: structure of the retina]

Natural vision
fMRI shows the brain areas involved in the neural activity associated with vision.
[Figure: visual pathways and the optic chiasm]

Artificial vision
- Camera = retina
- Frame grabber = nerves
- CPU = brain

Vision sensors: hardware
- CCD (Charge-Coupled Device): light-sensitive discharging capacitors of 5 to 25 microns
- CMOS (Complementary Metal-Oxide-Semiconductor technology)

Artificial vision
- Projection from the 3D world onto a 2D plane: perspective projection (transformation matrix)
- Discretization effects due to transducer pixels (CCD or CMOS)
- Misalignment errors
[Figure: parallel lines imaged as converging lines; pixel discretization]

Artificial vision
[Figure: perspective projection of a 3D object; optical axis, focal plane π_F, reversed image plane, principal image plane π]

Artificial vision: geometric parameters
[Figure: world frame R_m, camera frame R_c centred in C_c with focal length f, image frame R_i with origin O_i and axes i_i, j_i; focal plane π_F, image plane π, scene point P]

Artificial vision
Several rigid and perspective transformations are involved.
[Figure: transformation chain from the world frame R_m to the camera frame R_c (rigid transform T_A), perspective projection of P onto the plane π, rescaling to the image frame R_i, and optical correction]

Artificial vision
[Figure: pinhole geometry; scene point P at (x_c, z_c) in the camera frame with centre C_c, image point x_i on the plane π at focal distance f]

x_i / f = x_c / z_c  ⇒  x_i = f x_c / z_c
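A minimal sketch of this perspective projection, assuming camera-frame coordinates and a focal length f in the same units:

```python
def project_pinhole(point_c, f):
    """Perspective projection of a point (x_c, y_c, z_c), expressed in the
    camera frame, onto the image plane: x_i = f * x_c / z_c (and likewise
    for y_i)."""
    x_c, y_c, z_c = point_c
    if z_c <= 0:
        raise ValueError("the point must lie in front of the camera")
    return f * x_c / z_c, f * y_c / z_c

# A point 2 m ahead and 0.2 m to the side, seen through a 10 mm lens:
print(project_pinhole((0.2, 0.1, 2.0), f=0.010))   # (0.001, 0.0005) m
```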


Artificial vision: image parameters
[Figure: image plane with origin O_i, axes i_i and j_i, pixel dimensions p_x and p_y, and principal point C_i]

Artificial vision: aberration types
- Pincushion distortion
- Barrel distortion
- Radial distortion
- Non-radial (tangential) distortion
Radial distortion is modelled by a function D(r) that affects each point v in the projected plane relative to the principal point p, where D(r) is normally a nonlinear scalar function and p is close to the midpoint of the projected image. Barrel distortion is characterized by a positive gradient of the distortion function, pincushion by a negative gradient:

v_d = D(‖v − p‖)(v − p) + p
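A sketch of the radial model above, assuming the common polynomial choice D(r) = 1 + k1 r² + k2 r⁴ (the coefficients k1, k2 are illustrative, not from the slides):

```python
def apply_radial_distortion(v, p, k1, k2=0.0):
    """Apply v_d = D(r) * (v - p) + p with the polynomial model
    D(r) = 1 + k1*r^2 + k2*r^4, where r = ||v - p||. The sign of k1
    selects barrel-like versus pincushion-like distortion."""
    dx, dy = v[0] - p[0], v[1] - p[1]
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return p[0] + scale * dx, p[1] + scale * dy
```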

Artificial vision: image errors
Errors are due to the imperfect alignment of pixel elements.

Vision sensors: distance sensors
- Depth from focus
- Stereo vision
- Motion and optical flow

Depth from focus
The method consists in measuring the distance of an object by evaluating the focal-length adjustment necessary to bring it into focus.
[Figure: short-distance, medium-distance, and far-distance focus]

Depth from focus
[Figure: lens L, object at (x, y, z) at distance D, image plane (x_i, y_i), focal plane]

1/f = 1/D + 1/e   (thin-lens law: object at distance D, image plane at distance e)

b(x) = (δ e / 2) (1/f − 1/(d + e) − 1/s(x))   (blur radius b, for aperture diameter δ and shape term s(x))
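A sketch of the basic focusing principle, using only the thin-lens law above (the blur-radius model is not needed for this distance estimate):

```python
def depth_from_focus(f, e):
    """Invert the thin-lens law 1/f = 1/D + 1/e: given the focal length f
    and the lens-to-image distance e that brings the object into sharp
    focus, recover the object distance D (all in the same units)."""
    return 1.0 / (1.0 / f - 1.0 / e)

# A 50 mm lens focused with the image plane at 52.6 mm puts the
# object at roughly 1 m.
print(depth_from_focus(0.050, 0.0526))   # ~1.01 (metres)
```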

Depth from focus
[Figure: near focusing vs. far focusing]

Stereo disparity
[Figure: scene point (x, y, z) viewed by left and right lenses of focal length f, separated by the known baseline b; image points (x_l, y_l) and (x_r, y_r) on the image plane]

Stereo disparity
Idealized camera geometry for stereo vision:

x_l / f = (x + b/2) / z,   x_r / f = (x − b/2) / z

x = b (x_l + x_r) / (2 (x_l − x_r))
y = b (y_l + y_r) / (2 (y_l − y_r))
z = b f / (x_l − x_r)

(x_l − x_r) is the disparity between the two images; z is the resulting depth.
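A sketch of the depth computation from a matched conjugate pair, using the formulas above (names illustrative):

```python
def stereo_triangulate(xl, yl, xr, yr, b, f):
    """Recover (x, y, z) from a conjugate pair in the idealized parallel
    geometry: disparity d = xl - xr, depth z = b*f/d, and the lateral
    coordinates from the averaged image positions."""
    d = xl - xr
    if d <= 0:
        raise ValueError("disparity must be positive for a visible point")
    x = b * (xl + xr) / (2.0 * d)
    y = b * (yl + yr) / (2.0 * d)
    z = b * f / d
    return x, y, z
```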

Stereo vision
- Distance is inversely proportional to disparity: closer objects can be measured more accurately
- Disparity is proportional to baseline: for a given disparity error, the accuracy of the depth estimate increases with increasing baseline b
- However, as b is increased, some objects may appear in one camera but not in the other
- A point visible from both cameras produces a conjugate pair
- Conjugate pairs lie on epipolar lines (parallel to the x axis for the arrangement in the figure above)

Stereo point correspondence
These two points correspond: how do you find them in the two images?
[Figure: left and right images with corresponding points and the resulting disparity]

Epipolar lines
Corresponding points stay on the epipolar lines.
[Figure: camera centres C_1 and C_2 related by (R, t); a scene point P projects to q_1 and q_2; the epipolar lines pass through two known and fixed points e_1 and e_2, called the epipoles]

Stereo vision: depth calculation
- The key problem in stereo vision is how to optimally solve the correspondence problem
- Corresponding points lie on the epipolar lines
- Gray-level matching: match gray-level features on corresponding epipolar lines
- Zero crossing of the Laplacian of Gaussian is a widely used approach for identifying the same feature in the left and right images
- Brightness (image irradiance or intensity) I(x, y) is computed and used as shown below

Laplacian
- The Laplacian is a 2D isotropic measure of the 2nd spatial derivative of an image
- The Laplacian of an image highlights regions of rapid intensity change and is often used for edge detection
- The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian smoothing filter, in order to reduce its sensitivity to noise
- The operator normally takes a single gray-level image as input and produces another gray-level image as output

Laplacian
The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by:

L(x, y) = ∂²I/∂x² + ∂²I/∂y²

It is approximated by a convolution, L = P ∗ I, with kernels such as

P_1 = | 0  1  0 |        P_2 = | 1  1  1 |
      | 1 -4  1 |              | 1 -8  1 |
      | 0  1  0 |              | 1  1  1 |

and the Gaussian smoothing kernel

G = 1/16 | 1  2  1 |
         | 2  4  2 |
         | 1  2  1 |
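A sketch applying the kernels above, assuming SciPy's convolve2d is available; smoothing with G before applying P_1 gives the Laplacian-of-Gaussian used for zero-crossing matching:

```python
import numpy as np
from scipy.signal import convolve2d

# The 3x3 kernels from the slide: the 4-neighbour Laplacian P1 and the
# Gaussian smoothing kernel G.
P1 = np.array([[0,  1, 0],
               [1, -4, 1],
               [0,  1, 0]], dtype=float)
G = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0

def laplacian_of_gaussian(image):
    """Gaussian smoothing followed by the Laplacian: the operator whose
    zero crossings are used for stereo feature matching."""
    smoothed = convolve2d(image, G, mode="same", boundary="symm")
    return convolve2d(smoothed, P1, mode="same", boundary="symm")
```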

Convolution
- Convolution is a simple mathematical operation which is fundamental to many image-processing operators
- Convolution multiplies together two arrays of numbers, generally of different sizes but of the same dimensionality, to produce a third array of numbers of the same dimensionality
- It can be used in image processing to implement operators whose output pixel values are simple linear combinations of certain input pixel values
- In image processing, one of the input arrays is normally just the gray-level image; the second array is usually much smaller, also two-dimensional (although it may be just a single pixel thick), and is known as the kernel

Convolution
[Figure: illustration of a kernel sliding over an image]

Convolution matrix
[Figure: a 6×9 image I(i, j) and a 2×3 kernel K(i, j)]
If the image has M rows and N columns, and the kernel has m rows and n columns, then the output image will have M − m + 1 rows and N − n + 1 columns:

(6 − 2 + 1) × (9 − 3 + 1) = 5 × 7

Convolution product
[Figure: the kernel k11…k23 overlaid on successive image positions]

O(i, j) = Σ_{k=1..m} Σ_{l=1..n} I(i + k − 1, j + l − 1) K(k, l)

i = 1, …, M − m + 1;  j = 1, …, N − n + 1
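A direct sketch of this product (note that, as in the slide's formula, the kernel is not flipped, so strictly this is a cross-correlation):

```python
import numpy as np

def convolve_valid(I, K):
    """O(i, j) = sum_{k=1..m} sum_{l=1..n} I(i+k-1, j+l-1) * K(k, l),
    producing an output of size (M - m + 1) x (N - n + 1)."""
    M, N = I.shape
    m, n = K.shape
    O = np.zeros((M - m + 1, N - n + 1))
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            O[i, j] = np.sum(I[i:i + m, j:j + n] * K)
    return O

# A 6x9 image with a 2x3 kernel yields a 5x7 output, as on the slide.
print(convolve_valid(np.ones((6, 9)), np.ones((2, 3))).shape)   # (5, 7)
```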

Stereo vision: zero crossing of the Laplacian of Gaussian
- Identification of features that are stable and match well
- Laplacian of the intensity image
- Step/edge detection in a noisy image: filter through Gaussian smoothing

Edge detection

Stereo vision
[Figure: left (L) and right (R) vertically filtered images, with the resulting confidence image and depth image]

Optical flow
Optical flow (or optic flow) is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene.
Optical flow techniques (motion detection, object segmentation, time-to-collision and focus-of-expansion calculations, motion-compensated encoding, stereo disparity measurement) utilize this motion of the objects' surfaces and edges.

Optical flow
Optical flow methods try to calculate the motion between two image frames taken at times t and t + δt at every voxel position. These methods are called differential since they are based on local Taylor series approximations of the image signal, i.e., they use partial derivatives with respect to the spatial and temporal coordinates:

I(x, y, t) = I(x + δx, y + δy, t + δt)
           = I(x, y, t) + (∂I/∂x) δx + (∂I/∂y) δy + (∂I/∂t) δt + …

⇒ (∂I/∂x) δx + (∂I/∂y) δy + (∂I/∂t) δt = 0

A voxel (volumetric pixel) is a volume element representing a value on a regular grid in 3D space.

Optical flow
Dividing by δt gives the optical flow constraint:

(∂I/∂x) V_x + (∂I/∂y) V_y + ∂I/∂t = 0,   i.e.,   ∇Iᵀ V = −I_t

This is known as the aperture problem of optical flow algorithms: there is only one equation in two unknowns, so it cannot be solved on its own. To find the optical flow, another set of equations is needed, given by some additional constraint; all optical flow methods introduce additional conditions for estimating the actual flow.

Optical flow: Lucas–Kanade method
- A two-frame differential method for motion estimation
- The additional constraints needed for the estimation of the flow are introduced by assuming that the flow (V_x, V_y) is constant in a small window of size m × m, with m > 1, centered at pixel (x, y)
- Numbering the pixels 1, …, n = m², a set of equations is obtained (solved in the sketch below):

I_x1 V_x + I_y1 V_y = −I_t1
I_x2 V_x + I_y2 V_y = −I_t2
⋮
I_xn V_x + I_yn V_y = −I_tn

In matrix form A x = b, with x = (V_x, V_y), solved by least squares:

x = (AᵀA)⁻¹ Aᵀ b
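A sketch of the least-squares solution for one window, assuming the derivative images I_x, I_y, I_t have already been computed:

```python
import numpy as np

def lucas_kanade_window(Ix, Iy, It):
    """Solve the stacked constraints I_xk*Vx + I_yk*Vy = -I_tk for one
    window by least squares, x = (A^T A)^{-1} A^T b, with A = [Ix | Iy]
    and b = -It. Ix, Iy, It hold the spatial and temporal derivatives at
    the n = m*m pixels of the window."""
    A = np.stack([np.ravel(Ix), np.ravel(Iy)], axis=1)   # n x 2
    b = -np.ravel(It)                                    # n
    # lstsq evaluates the normal-equation solution in a numerically
    # safer way than forming (A^T A)^{-1} explicitly.
    V, *_ = np.linalg.lstsq(A, b, rcond=None)
    return V   # (Vx, Vy)
```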