Mech 296: Vision for Robotic Applications. Today's Summary
Lecture 3: Visual Sensing

[Figure: Gravity Probe B]

Today's Summary
1. Visual Signal: Position in Time
   - Acquire
   - Process
   - Obtain Camera State
2. Examples
   - Star Tracker
   - Jellyfish tracking
Color Video
[Figure: video as a stack of color images indexed by x, y, and time]

Visual Positioning Signal
Vision Sensor (Video Hardware) -> Acquire -> Process -> Obtain Camera State -> Position Signal
Video Acquisition
Goal: visual sensing requires manipulation of video with a digital computer.
Video acquisition depends on the camera output type:
- Analog video: a framegrabber card is required. Ex: plug an NTSC signal into a Matrox Meteor II board. The framegrabber digitizes the analog signal and outputs the digital frame into a memory buffer.
- Digital video: direct interface. The video hardware is attached to the computer via a digital input port. Digital frames are loaded directly into memory (USB, FireWire) or through a dedicated framegrabber (Camera Link). Alternatively, digital frames are loaded from a video file on the hard disk.

Analog Video Transmission
- Analog video is uncompressed.
- Common television transmission standards include NTSC (North America) and PAL (Europe).
- NTSC format:
  - Fields arrive at a rate of 60 Hz; full images are constructed at 30 Hz using interlaced lines from the even and odd fields.
  - A full image consists of 525 lines; data is transmitted on approximately 480 lines.
  - Bandwidth results in an effective horizontal resolution of approximately 640 lines.
  - Color resolution is approximately half the grayscale resolution and uses the YIQ color space.
- Analog video transmission is susceptible to RF interference.
Interlaced Analog Video
[Figure: even and odd scan lines interleaved into a full frame]

Digital Video Compression
Digital video is compressed based on time differencing: a digital video may consist of N frames, and the compressed video is smaller than N individually compressed images.
Compression method (time differencing):
- Keyframe: a reference image, stored periodically and compressed in the same manner as a conventional image.
- Intermediate frames: differenced from the most recent keyframe. The difference image generally contains little new information content, so the compression rate for intermediate frames is very high.
Several standards exist for digital video:
- File formats: MPEG, AVI, QuickTime
- Compression methods: Microsoft Video, Cinepak, Indeo
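The keyframe-plus-difference scheme above can be sketched in a few lines. This is a minimal Python/NumPy illustration (not part of the lecture material); it stores the structure only and ignores the entropy coding a real codec would apply to keyframes and difference images:

```python
import numpy as np

def compress_sequence(frames, keyframe_interval=5):
    """Split a frame sequence into keyframes and difference frames.

    Every keyframe_interval-th frame is stored whole; intermediate
    frames are stored as signed differences from the most recent
    keyframe.  For slowly changing scenes the differences are mostly
    zero and therefore compress well.
    """
    stored = []
    keyframe = None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            keyframe = frame
            stored.append(("key", frame))
        else:
            diff = frame.astype(np.int16) - keyframe.astype(np.int16)
            stored.append(("diff", diff))
    return stored

def decompress_sequence(stored):
    """Rebuild the original frames from keyframes plus differences."""
    frames = []
    keyframe = None
    for kind, data in stored:
        if kind == "key":
            keyframe = data
            frames.append(data)
        else:
            frames.append((keyframe.astype(np.int16) + data).astype(np.uint8))
    return frames
```

The round trip is lossless by construction; the savings come entirely from the difference frames being near-zero arrays.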
Video Acquisition in Matlab (I)
Matlab offers the Image Acquisition Toolbox.

imaqhwinfo: query available video devices
Ex: list video devices
    vidinfo = imaqhwinfo;            % hardware info
    vidinfo.InstalledAdaptors        % available inputs
Ex: obtain device ID number
    vidinfo = imaqhwinfo('winvideo');
    vidinfo.DeviceIDs                % winvideo device IDs

videoinput: initialize video device
Ex: initialize winvideo (USB) device with ID = 1
    vidobject = videoinput('winvideo', 1);

Video Acquisition in Matlab (II)
getsnapshot: obtain a single video image (slow, ~1 Hz)
Ex: get image, im1, from video object, vidobject
    im1 = getsnapshot(vidobject);    % get frame
    image(im1);                      % display to screen

trigger: obtain one or more frames (faster, > 1 Hz)
Ex: get image, im1, from video object, vidobject
    % Configure the video object so that one image is acquired for each
    % trigger (FramesPerTrigger), and so that the maximum number of
    % frames acquired (TriggerRepeat) is not restricted
    set(vidobject, 'FramesPerTrigger', 1);
    set(vidobject, 'TriggerRepeat', Inf);
    % Configure the video object to acquire images based on a software
    % request ('Manual') rather than an external signal
    triggerconfig(vidobject, 'Manual');
    % Signal the video object to begin waiting for a trigger signal
    start(vidobject);
    % Trigger image acquisition (software trigger)
    trigger(vidobject);
    % Extract one frame from the video buffer
    im1 = getdata(vidobject, 1);
Video Acquisition in Matlab (III)
flushdata: clear frames from memory. This command is necessary for visual sensing; otherwise Matlab will eventually run out of memory.
Ex: clear the most recently acquired frame
    flushdata(vidobject, 'trigger'); % flush buffer
    imaqmem                          % get buffer stats

stop: signal the device to stop accepting triggers
Ex: on exit, the video object should be halted and cleared
    stop(vidobject);                 % halt the object
    flushdata(vidobject);            % clear the video buffer
    delete(vidobject);               % delete the object

Matlab Acquisition Sample
    % Initialize video object
    vidobject = videoinput('winvideo', 1);
    set(vidobject, 'FramesPerTrigger', 1);
    set(vidobject, 'TriggerRepeat', Inf);
    triggerconfig(vidobject, 'Manual');
    start(vidobject);

    % Begin acquisition loop
    for n = 1:10000
        tic                              % begin timing
        trigger(vidobject);              % trigger video device
        im1 = getdata(vidobject, 1);     % acquire image
        image(im1);                      % displaying video makes loop slow
        pause(.01);                      % introduce pause to force display
        flushdata(vidobject, 'trigger'); % clear video buffer
        toc                              % time elapsed since tic
    end

    % Close video object
    stop(vidobject);
    flushdata(vidobject);
    delete(vidobject);
Reading a Movie File into Matlab
Instead of working with video in real time, it is also possible to post-process video files.

aviinfo: obtain information on an AVI movie file
Ex: look up the frame rate for an AVI, sample.avi
    movieinfo = aviinfo('sample.avi');  % get stats
    movieinfo.FramesPerSecond           % display frame rate

aviread: read AVI frames into a Matlab structure
Ex: extract and display the first ten frames of sample.avi
    moviearray = aviread('sample.avi', 1:10);
    for n = 1:10
        image(moviearray(n).cdata);     % display frames
        pause(.01);
    end

Visual Positioning Signal
Vision Sensor (Video Hardware) -> Acquire -> Process -> Obtain Camera State -> Position Signal
Segmentation Using Time Difference
Segmentation is the process of dividing an image into smaller regions (segments):
- Lecture 1: pixel intensity
- Lecture 2: pixel color
- This lecture: pixel temporal changes
  - Time difference: difference between successive images in a time series
  - Background difference: difference between an image in a time series and a reference image

Comparison: Differencing
    Time Difference = g(t) - g(t-1)
    Background Difference = g(t) - g(0)
[Figure: example difference images at t = 0, 1, 2, 3, 4]
The example uses green images only; a threshold is applied to the absolute value of the image difference. (Slide footer: MECH 296: Vision for Robotics, Dr. Jason Rife)
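The thresholded differencing above amounts to a single line of array arithmetic. A small Python/NumPy sketch (illustrative only; the threshold value is an arbitrary choice, not from the lecture):

```python
import numpy as np

def difference_segment(current, reference, threshold=25):
    """Segment changed pixels by thresholding the absolute difference
    between the current frame and a reference frame.

    For time differencing the reference is the previous frame g(t-1);
    for background differencing it is a fixed reference image g(0).
    Returns a boolean mask of pixels whose intensity changed by more
    than the threshold.
    """
    # Cast to a signed type before subtracting so uint8 values do not wrap.
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold
```

For color video, the same operation can be applied to a single channel (as in the green-only example on the slide) or to each channel separately.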
Visual Positioning Signal
Vision Sensor (Video Hardware) -> Acquire -> Process -> Obtain Camera State -> Position Signal

Obtaining Camera State
- Camera pixel data is inherently 2D. Pixel locations may be interpreted as bearings (angles relative to the camera axis) or as planar positions.
- 3D point positions may be calculated by combining information from multiple pixels:
  - using a reference of known geometry viewed in a single image, e.g. dots from parallel lasers on a flat surface (for size scaling and orientation) or pattern orientation (for camera angle relative to a scene), or
  - using multiple cameras to view a scene of unknown geometry (stereo vision).
- 6-DoF camera position can be calculated by combining observations of multiple points in the environment over time.
- In this class the focus is on 2D information.
Pinhole Camera Model
Simplest camera model: a pinhole aperture.
- Pinhole cameras: dark images (little light enters the pinhole); infinite depth of focus.
- Lens cameras: brighter images (bigger aperture); focus depends on the distance of the object from the camera.
[Figure: object plane at distance D mapped through the pinhole onto the image plane]

Planar Positioning
Points in space are mapped onto the image plane.
- For objects lying on a plane parallel to the image plane, pixel position differences are proportional to actual physical distances.
- For some alignment tasks, it is sufficient to express relative distances in pixel units.
- Absolute distance (in meters) can be determined by calibration, if the distance between the camera and the plane of the objects is known.
- For objects lying in a plane at an oblique angle relative to the image plane, pixel distances are foreshortened by perspective.
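The proportionality above is all that is needed for planar positioning once a scale factor is known. A minimal Python sketch (the function and parameter names are hypothetical, and the code is Python rather than the lecture's Matlab); the scale factor is assumed to come from calibration, e.g. imaging a target of known size:

```python
def pixels_to_meters(dx_pixels, dy_pixels, scale):
    """Convert pixel offsets to physical distances for objects lying on
    a plane parallel to the image plane.

    scale is the meters-per-pixel factor from calibration, for example
    scale = known_target_width_m / measured_target_width_px
    (valid only at the calibrated camera-to-plane distance).
    """
    return dx_pixels * scale, dy_pixels * scale
```

For example, if two dots from parallel lasers 0.05 m apart appear 100 pixels apart, the scale is 0.05 / 100 meters per pixel, and a 200-pixel offset corresponds to 0.1 m.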
2D Planar Positioning Application
Example: overhead vision system
[Figure: overhead camera at height D observing offsets Δx and Δy]

Bearing
Pixel measurements describe the angle between an observed point and the camera centerline. This angular relationship always applies, even if the observed points are not coplanar.
- Zenith angle, φ, describes the angle relative to the camera axis.
- Azimuth angle, θ, describes the angle around the camera axis.
[Figure: axes z and x - x0 with angles φ and θ]
2D Planar Positioning Application
Example: warehouse inventory

Bearing equations:
    θ = atan2(y - y0, x - x0)
    sqrt((x - x0)^2 + (y - y0)^2) = d tan(φ)
The left-hand side of the second equation is the pixel distance between the object and the origin (image center); d is a constant of proportionality, assuming square pixels.

Lens Distortion
- Actual lenses have finite thickness.
- Lens patterns result in systematic errors in vision position measurements (planar position or bearing).
[Figure: pincushion distortion, no distortion, and barrel distortion grids]
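The two bearing equations above translate directly into code. A Python sketch (function and parameter names are illustrative; (x0, y0) is the pixel location of the camera axis and d the proportionality constant from the slide):

```python
import math

def pixel_bearing(x, y, x0, y0, d):
    """Compute azimuth and zenith angles from a (distortion-corrected)
    pixel position, per the bearing equations:
        theta = atan2(y - y0, x - x0)
        tan(phi) = sqrt((x - x0)^2 + (y - y0)^2) / d
    """
    theta = math.atan2(y - y0, x - x0)
    r = math.hypot(x - x0, y - y0)   # pixel distance from image center
    phi = math.atan(r / d)
    return theta, phi
```

A point 100 pixels to the right of a center at (320, 240), with d = 100, has azimuth 0 and zenith angle atan(1) = 45 degrees.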
Lens Distortion Calibration
Calibration target: perpendicular to the camera axis, located at 15 from the camera, with grid lines at 2 intervals.
A simple calibration procedure for lens distortion:
1. Choose points in the image (i.e. intersections of grid lines) and measure two pieces of data: gridline position and pixel position.
2. Compute the radial distance of each point relative to the image center point (assuming it aligns with the lens center).
3. Perform a polynomial fit to relate gridline radial position to pixel radial position.

Polynomial Fit for Radial Distortion
1. For each data point, relate the grid measurement to the pixel measurement through unknown polynomial weights C1, C2, etc.:
       r_grid,k = C1 r_pixel,k + C2 r_pixel,k^2 + C3 r_pixel,k^3
2. Solve for the unknown coefficients with the least-squares method:
       c = A\b
   where
       A = [ r_pixel,1  r_pixel,1^2  r_pixel,1^3
             r_pixel,2  r_pixel,2^2  r_pixel,2^3
             ...
             r_pixel,N  r_pixel,N^2  r_pixel,N^3 ],
       b = [ r_grid,1; r_grid,2; ...; r_grid,N ],
       c = [ C1; C2; C3 ]
3. The corrected (undistorted) measurement is r_pixel,corr = A c, with residual
       δ = || r_pixel,corr - b ||
   Evaluate the residual and adjust the center point if necessary to minimize δ (and to compensate for misalignment of the imaging array relative to the lens optics).
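Step 2 above is a standard linear least-squares problem. A Python/NumPy sketch of the fit (equivalent in spirit to the Matlab backslash step c = A\b; names are illustrative):

```python
import numpy as np

def fit_radial_distortion(r_pixel, r_grid):
    """Least-squares fit of the radial distortion polynomial
        r_grid = C1*r + C2*r^2 + C3*r^3
    given matched radial distances in pixel and grid units.
    Returns the coefficient vector [C1, C2, C3].
    """
    r = np.asarray(r_pixel, dtype=float)
    A = np.column_stack([r, r**2, r**3])           # one row per point
    b = np.asarray(r_grid, dtype=float)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)      # c = A \ b
    return c
```

With noiseless synthetic data the fit recovers the generating coefficients; with real measurements the residual indicates how well the assumed center point aligns with the lens center.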
Scale-Factor Calibration
After the radial distortion calibration is complete, it is possible to calibrate the pinhole geometry. The remaining scale factor (assuming square pixels) is:
- Planar position case (scale factor C_xy):
      x_meters = C_xy x_pixel,corr
      y_meters = C_xy y_pixel,corr
- Bearing case (scale factor C_φ):
      r_pixel,corr = C_φ tan(φ)
      θ_actual = θ_pixel,corr
      φ_actual = atan( C_φ^-1 r_pixel,corr )

Scale Factor Solution
With the calibration plane at distance z = z_nom + Δ from the pinhole:
    r_pixel,corr = C_φ tan(φ_actual) = C_φ r_grid / (z_nom + Δ)
- Planar position: automatically have a scale factor C_xy = 1 (in grid units).
- Bearing angle: in order to compute the scale factor, C_φ, the distance of the pinhole from the calibration plane, z, is needed. This distance is approximately equal to the distance from the calibration plane to the front of the camera, z_nom (in grid units).
LSQ for Bearing Scale Factor
1. Compute the bearing scale factor, C_φ, assuming the camera-to-calibration-plane measurement is biased by an unknown constant, Δ. Rearranging r_pixel,corr = C_φ r_grid / (z_nom + Δ) gives the fit equation
       z_nom r_pixel,corr,k = C_φ r_grid,k - Δ r_pixel,corr,k
2. Solve for the unknown coefficients with the least-squares method:
       c = A\b
   where
       A = [ r_grid,1  -r_pixel,corr,1
             r_grid,2  -r_pixel,corr,2
             ...
             r_grid,N  -r_pixel,corr,N ],
       b = [ z_nom r_pixel,corr,1; ...; z_nom r_pixel,corr,N ],
       c = [ C_φ; Δ ]
3. Confirm that the residual, δ = || A c - b ||, and the distance correction, Δ, are both small.

Viewing Cone
For a conventional camera, the maximum viewing angle is set by the size of the pixel array and by its distance from the focal center (which is a function of lens shape). This maximum viewing angle relative to the optical axis is labeled φ_max. Specs often refer to a camera field of view, where FOV = 2 φ_max. The FOV places a significant constraint on vision-based control.
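The viewing-cone geometry above can be made concrete with the standard pinhole relation tan(φ_max) = (half the sensor width) / (focal distance). A small Python sketch (this formula is standard pinhole geometry; the slide only defines FOV = 2 φ_max, so the parameterization here is an assumption):

```python
import math

def field_of_view(sensor_width, focal_length):
    """Full field of view of a pinhole camera, FOV = 2 * phi_max, where
    tan(phi_max) = (sensor_width / 2) / focal_length.  Both arguments
    must share the same units; the result is in radians.
    """
    phi_max = math.atan((sensor_width / 2) / focal_length)
    return 2 * phi_max
```

For instance, a sensor twice as wide as its distance from the focal center gives phi_max = 45 degrees and a 90-degree field of view; shrinking the sensor or lengthening the focal distance narrows the cone.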
Increasing Field of View
To maximize the field of view, it may be desirable to use:
- a wide-angle lens, or
- a panoramic mirror.
These methods increase the field of view but decrease resolution and add angular distortion. In exchange, they permit easier tracking of targets and features in the environment.

Visual Positioning Signal
Vision Sensor (Video Hardware) -> Acquire -> Process -> Obtain Camera State -> Position Signal
Applying Vision Signal
Example: Gravity Probe B, which measures position relative to the stars.
- Coarse positioning: star tracker
- Fine positioning: guide star + telescope

Optical Sensing Hardware
Example: Gravity Probe B
- Coarse positioning: star tracker (FOV: 1 degree; resolution: 1 arcminute)
- Fine positioning: telescope (FOV: 1 arcminute; resolution: 1 milliarcminute)
Star Tracker
[Figure: star tracker view]

Guide Star
Two-pixel measurement: the sensor compares the intensity on two pixels, and the control attempts to balance the intensity on both pixels.
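The balance control described above can be sketched as a one-line proportional law. A Python illustration (the gain value and sign convention are assumptions for illustration, not from the lecture):

```python
def balance_step(i_left, i_right, gain=0.001):
    """One proportional-control step for a two-pixel guide-star sensor:
    command an angle increment toward the brighter pixel, so that at
    steady state the star straddles both pixels with equal intensity.
    Positive output steers toward the right pixel.
    """
    return gain * (i_right - i_left)
```

When the intensities are equal the commanded correction is zero; an imbalance produces a correction proportional to the intensity difference.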
Applying Vision Signal
Example: jellyfish tracking

Filter Impacts Segmentation
Four cases of thresholding with different filters:
- Raw
- Morphological filter
- Narrow Gaussian
- Wide Gaussian
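The effect of Gaussian filter width on segmentation can be reproduced with a separable blur before thresholding. A Python/NumPy sketch (the kernel radius and threshold are illustrative choices; a wider Gaussian suppresses more pixel noise but blurs small targets below the threshold):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth_then_threshold(image, sigma, threshold):
    """Separable Gaussian blur (rows, then columns) followed by
    thresholding.  Returns a boolean segmentation mask.
    """
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    img = np.asarray(image, dtype=float)
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, k, mode="same"), 0, blurred)
    return blurred > threshold
```

A single bright pixel survives a narrow Gaussian but is spread thin (and may drop below the threshold) under a wide one, matching the four thresholding cases on the slide.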
Visual Positioning Review
Vision Sensor (Video Hardware) -> Acquire -> Process -> Obtain Camera State -> Position Signal
- Video hardware: analog and digital video
- Acquire: video capture
- Process: time difference, background difference, spatial filtering
- Obtain camera state: planar position, bearing, calibration, field of view
More informationAgenda. Rotations. Camera models. Camera calibration. Homographies
Agenda Rotations Camera models Camera calibration Homographies D Rotations R Y = Z r r r r r r r r r Y Z Think of as change of basis where ri = r(i,:) are orthonormal basis vectors r rotated coordinate
More informationE x Direction of Propagation. y B y
x E x Direction of Propagation k z z y B y An electromagnetic wave is a travelling wave which has time varying electric and magnetic fields which are perpendicular to each other and the direction of propagation,
More informationPin Hole Cameras & Warp Functions
Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Motivation Taken from: http://img.gawkerassets.com/img/18w7i1umpzoa9jpg/original.jpg
More informationTECHSPEC COMPACT FIXED FOCAL LENGTH LENS
Designed for use in machine vision applications, our TECHSPEC Compact Fixed Focal Length Lenses are ideal for use in factory automation, inspection or qualification. These machine vision lenses have been
More informationCV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more
CV: 3D to 2D mathematics Perspective transformation; camera calibration; stereo computation; and more Roadmap of topics n Review perspective transformation n Camera calibration n Stereo methods n Structured
More information4. Recommended alignment procedure:
4. Recommended alignment procedure: 4.1 Introduction The described below procedure presents an example of alignment of beam shapers Shaper and Focal- Shaper (F- Shaper) with using the standard Shaper Mount
More informationPerspective Projection Describes Image Formation Berthold K.P. Horn
Perspective Projection Describes Image Formation Berthold K.P. Horn Wheel Alignment: Camber, Caster, Toe-In, SAI, Camber: angle between axle and horizontal plane. Toe: angle between projection of axle
More information3D Geometry and Camera Calibration
3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often
More informationPin Hole Cameras & Warp Functions
Pin Hole Cameras & Warp Functions Instructor - Simon Lucey 16-423 - Designing Computer Vision Apps Today Pinhole Camera. Homogenous Coordinates. Planar Warp Functions. Example of SLAM for AR Taken from:
More informationCalibration of a fish eye lens with field of view larger than 180
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Calibration of a fish eye lens with field of view larger than 18 Hynek Bakstein and Tomáš Pajdla {bakstein, pajdla}@cmp.felk.cvut.cz REPRINT Hynek
More informationAutonomous Vehicle Navigation Using Stereoscopic Imaging
Autonomous Vehicle Navigation Using Stereoscopic Imaging Project Proposal By: Beach Wlaznik Advisors: Dr. Huggins Dr. Stewart December 7, 2006 I. Introduction The objective of the Autonomous Vehicle Navigation
More informationCamera calibration for miniature, low-cost, wide-angle imaging systems
Camera calibration for miniature, low-cost, wide-angle imaging systems Oliver Frank, Roman Katz, Christel-Loic Tisse and Hugh Durrant-Whyte ARC Centre of Excellence for Autonomous Systems University of
More informationA Stereo Machine Vision System for. displacements when it is subjected to elasticplastic
A Stereo Machine Vision System for measuring three-dimensional crack-tip displacements when it is subjected to elasticplastic deformation Arash Karpour Supervisor: Associate Professor K.Zarrabi Co-Supervisor:
More informationPart Images Formed by Flat Mirrors. This Chapter. Phys. 281B Geometric Optics. Chapter 2 : Image Formation. Chapter 2: Image Formation
Phys. 281B Geometric Optics This Chapter 3 Physics Department Yarmouk University 21163 Irbid Jordan 1- Images Formed by Flat Mirrors 2- Images Formed by Spherical Mirrors 3- Images Formed by Refraction
More informationECE-161C Cameras. Nuno Vasconcelos ECE Department, UCSD
ECE-161C Cameras Nuno Vasconcelos ECE Department, UCSD Image formation all image understanding starts with understanding of image formation: projection of a scene from 3D world into image on 2D plane 2
More informationComputer Vision CS 776 Fall 2018
Computer Vision CS 776 Fall 2018 Cameras & Photogrammetry 1 Prof. Alex Berg (Slide credits to many folks on individual slides) Cameras & Photogrammetry 1 Albrecht Dürer early 1500s Brunelleschi, early
More informationDistribution Ray-Tracing. Programação 3D Simulação e Jogos
Distribution Ray-Tracing Programação 3D Simulação e Jogos Bibliography K. Suffern; Ray Tracing from the Ground Up, http://www.raytracegroundup.com Chapter 4, 5 for Anti-Aliasing Chapter 6 for Disc Sampling
More informationCIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM
CIS 580, Machine Perception, Spring 2015 Homework 1 Due: 2015.02.09. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Camera Model, Focal Length and
More informationARCHITECTURE & GAMES. A is for Architect Simple Mass Modeling FORM & SPACE. Industry Careers Framework. Applied. Getting Started.
A is for Architect Simple Mass Modeling One of the first introductions to form and space usually comes at a very early age. As an infant, you might have played with building blocks to help hone your motor
More informationImage Transformations & Camera Calibration. Mašinska vizija, 2018.
Image Transformations & Camera Calibration Mašinska vizija, 2018. Image transformations What ve we learnt so far? Example 1 resize and rotate Open warp_affine_template.cpp Perform simple resize
More informationGEOG 4110/5100 Advanced Remote Sensing Lecture 4
GEOG 4110/5100 Advanced Remote Sensing Lecture 4 Geometric Distortion Relevant Reading: Richards, Sections 2.11-2.17 Review What factors influence radiometric distortion? What is striping in an image?
More informationMarcel Worring Intelligent Sensory Information Systems
Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video
More informationA 3-D Scanner Capturing Range and Color for the Robotics Applications
J.Haverinen & J.Röning, A 3-D Scanner Capturing Range and Color for the Robotics Applications, 24th Workshop of the AAPR - Applications of 3D-Imaging and Graph-based Modeling, May 25-26, Villach, Carinthia,
More informationRepresenting the World
Table of Contents Representing the World...1 Sensory Transducers...1 The Lateral Geniculate Nucleus (LGN)... 2 Areas V1 to V5 the Visual Cortex... 2 Computer Vision... 3 Intensity Images... 3 Image Focusing...
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationRigid Body Motion and Image Formation. Jana Kosecka, CS 482
Rigid Body Motion and Image Formation Jana Kosecka, CS 482 A free vector is defined by a pair of points : Coordinates of the vector : 1 3D Rotation of Points Euler angles Rotation Matrices in 3D 3 by 3
More information