Augmented Reality, Advanced SLAM, Applications


1 Augmented Reality, Advanced SLAM, Applications
Prof. Didier Stricker & Dr. Alain Pagani
Lecture 3D Computer Vision: AR, SLAM, Applications

2 Introduction
Previous lectures:
- Basics (camera, projective geometry)
- Structure From Motion
- Structured Light
- Dense 3D Reconstruction
- Depth Cameras
Today:
- Insights into advanced SLAM techniques
- Augmented Reality
- Applications of 3D Computer Vision

4 Recall: structure and motion (SAM)
Unknown camera viewpoints. Reconstruct:
- Sparse scene geometry
- Camera motion

5 Offline vs. online structure and motion
Offline:
- E.g. as basis for dense 3D model reconstruction
- No real-time requirements; all images are available at once
Online:
- E.g. for mobile Augmented Reality in unknown environments
- Real-time requirements; images become available one by one, and output is required at each time step

6 Online structure and motion (calibrated case)
Reminder (Lecture 7): Iterative SfM — alternating estimation of camera poses and 3D feature locations (triangulation) from a (continuous) image sequence.
Step 1: Compute the pose of the first two cameras (t = 1, t = 2) from 2D matches between 2D feature locations (from image processing): relative pose problem, 8-point algorithm.
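As a minimal numpy sketch (function name illustrative, not from the slides), the 8-point algorithm for the essential matrix in the calibrated case can be written as: each correspondence contributes one linear constraint on the entries of E, and the constraint that E has singular values (s, s, 0) is enforced afterwards.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate the essential matrix E from >= 8 normalized (calibrated)
    point correspondences x1 <-> x2, given as (N, 2) arrays.
    Solves x2_h^T E x1_h = 0 in a least-squares sense via SVD."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        # One row of the linear system from x2_h^T E x1_h = 0
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
    # Null vector of A -> entries of E (up to scale)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix constraint: singular values (s, s, 0)
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

The recovered E is defined only up to scale and sign; the relative pose (R, t) is then extracted from its SVD.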

7 Online structure and motion (calibrated case)
Reminder (Lecture 7): Iterative SfM (continued).
Step 2: Triangulate 3D points from the 2D feature locations and the two camera poses (t = 1, t = 2), yielding the first 3D feature locations.
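The triangulation step above can be sketched with the standard linear (DLT) method — a hedged example, not the exact formulation of the lecture: each view contributes two rows of a homogeneous system whose null vector is the 3D point.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 (each shape (2,)) in two views with 3x4 projection matrices
    P1, P2. Each view contributes two rows of the system A X = 0."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With a short baseline the system becomes ill-conditioned, which is exactly the difficulty noted later for continuous image streams.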

8 Online structure and motion (calibrated case)
Reminder (Lecture 7): Iterative SfM (continued).
Step 3: For a new frame (t = 3), extract 2D feature locations and establish 2D matches to the previous frame.

9 Online structure and motion (calibrated case)
Reminder (Lecture 7): Iterative SfM (continued).
Step 4: Estimate the next camera pose (t = 3), now from 2D/3D correspondences: pose problem, PnP.

10 Online structure and motion (calibrated case)
Reminder (Lecture 7): Iterative SfM (continued).
Step 5: Triangulate additional 3D points from the new view.

11 Online structure and motion (calibrated case)
Reminder (Lecture 7): Iterative SfM (continued).
Step 6: Refine known 3D points using the new camera poses.

12 Online structure and motion (calibrated case)
Reminder (Lecture 7): Iterative SfM (continued).
Step 7: Refine known cameras using the new 3D points.

13 Global Bundle Adjustment
Global bundle adjustment: jointly optimize over all camera poses and 3D points (previous lecture):

    x̂ = argmin_x Σ_{t=1}^{k} Σ_{l=1}^{n} || r_{t,l}(x) ||²

- Minimize over a parameter vector x containing all camera poses and 3D points: 6 parameters for each camera + 3 for each 3D point, i.e. 6k + 3l parameters must be estimated
- The involved matrices are sparse!
- r_{t,l} is the residual (reprojection error) of point l in camera t
- Nonlinear estimation problem: use e.g. Levenberg-Marquardt, starting at the linear solution
- Open-source libraries are available, e.g. Sparse Bundle Adjustment (SBA)
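A minimal sketch of the residual vector being minimized (the parameterization and names are illustrative assumptions, not the lecture's code): a Levenberg-Marquardt solver would minimize the squared norm of this stacked vector over the 6k + 3l parameters.

```python
import numpy as np

def reprojection_residuals(poses, points, observations):
    """Stack the reprojection residuals r_{t,l} for a calibrated bundle
    adjustment problem. Hypothetical layout: poses maps camera index to an
    (R, t) pair, points maps point index to a 3D point, and observations
    maps (cam_idx, pt_idx) to a measured normalized image point."""
    res = []
    for (t_idx, l_idx), z in observations.items():
        R, t = poses[t_idx]
        Xc = R @ points[l_idx] + t          # point in camera coordinates
        proj = Xc[:2] / Xc[2]               # pinhole projection (normalized)
        res.append(proj - z)                # reprojection error r_{t,l}
    return np.concatenate(res)
```

The sparsity noted on the slide arises because each residual depends on only one camera and one point, so the Jacobian has a block structure that SBA exploits.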

14 Drift reduction using uncertainties
Incorporate uncertainties, e.g. a simple stochastic model and weighted least squares (WLS) estimation. All entities are modelled as Gaussian random variables.

15 3D point refinement
Incorporate each new camera view, i.e. each time the feature is observed in an image.
Methods:
- Repeated triangulation
- Recursive filtering (e.g. extended Kalman filter) — treated in the lecture Computer Vision: Object and People Tracking; leads to filter-based SLAM

16 SLAM with a Bayesian filter
Continuous image stream (image sequence):
- Triangulation is difficult (short baseline)
- Matching of keypoints can drift over time
- SfM-based SLAM is not well adapted
This motivates the introduction of filtering techniques: SLAM was first used in robotics, with simpler sensors (e.g. LIDAR).

17 Bayesian Tracking: the components
- State x_t: camera position
- Measurement z_t: image-based measurements
- Control input u_t: in visual tracking, no control input
(Treated in the lecture Computer Vision: Object and People Tracking.)
The state is hidden; only the measurement is observed.
Markovian assumptions:
- MA1: state x_t depends only on the previous state x_{t-1}
- MA2: measurement z_t depends only on the state x_t

18 Bayesian Tracking: derivation
How to express p(x_t | z_{1:t}) when knowing p(x_{t-1} | z_{1:t-1})?
(Treated in the lecture Computer Vision: Object and People Tracking.)

    p(x_t | z_{1:t}) = p(x_t | z_t, z_{1:t-1})
                     = p(z_t | x_t, z_{1:t-1}) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})      (Bayes' theorem)
                     = p(z_t | x_t) p(x_t | z_{1:t-1}) / p(z_t | z_{1:t-1})                 (Markovian assumption MA2)
                     = η p(z_t | x_t) ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}    (marginalisation, Chapman-Kolmogorov, MA1)

with normalizer η = 1 / p(z_t | z_{1:t-1}), measurement model p(z_t | x_t), and motion model p(x_t | x_{t-1}).

19 Bayesian Tracking: generic equation, components

    p(x_t | z_{1:t}) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}

- Measurement model p(z_t | x_t): correct/measure step
- Motion model under the integral: predict step
Solutions in the general case:
- Kalman filter if the model is linear Gaussian
- Extended Kalman filter if the model is nonlinear (linearization by Taylor expansion)
- Particle filter in the general case
In vision-based tracking, the models are not linear!
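For the linear-Gaussian case, the predict/correct recursion above reduces to the Kalman filter. A minimal sketch (generic matrices, illustrative names):

```python
import numpy as np

def kalman_step(mu, P, z, F, Q, H, R_cov):
    """One predict/correct cycle of a linear-Gaussian Bayes filter
    (Kalman filter). mu, P: previous posterior mean/covariance;
    z: new measurement; F, Q: motion model; H, R_cov: measurement model."""
    # Predict: push the belief through the motion model
    mu_pred = F @ mu
    P_pred = F @ P @ F.T + Q
    # Correct: fuse the predicted belief with the measurement
    S = H @ P_pred @ H.T + R_cov          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    P_new = (np.eye(len(mu)) - K @ H) @ P_pred
    return mu_new, P_new
```

The extended Kalman filter used below in MonoSLAM follows the same cycle, with F and H replaced by Jacobians of the nonlinear motion and measurement functions.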

20 Filter-based SLAM
The map (environment) has to be added to the equations:
- Probability of interest: the joint posterior over camera state and map, p(x_t, m | z_{1:t})
- Motion model
- Measurement model

21 Filter-based SLAM (illustration slide)

22 MonoSLAM (EKF-SLAM)
MonoSLAM [1] is EKF-based. Filter loop:
- Initialization / map management (generate & delete features)
- Prediction
- Measurement acquisition
- Data association
- Update
[1] A. J. Davison, I. D. Reid, N. D. Molton, O. Stasse, "MonoSLAM: Real-Time Single Camera SLAM", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, June 2007.

23 MonoSLAM (EKF-SLAM): prediction
State: x(t-1) = (x_v(t-1), y_1, y_2, ...), with camera state x_v = (r^W, q^{WR}, v^W, ω^R):
- r^W(t): 3D position vector
- q^{WR}(t): orientation quaternion
- v^W(t): linear velocity vector
- ω^R(t): angular velocity vector
- y_i: landmark position vectors
Prediction x̂_v(t) with a constant velocity dynamic system model:

    r^W(t)    = r^W(t-1) + v^W(t-1) Δt
    q^{WR}(t) = q^{WR}(t-1) × q(ω^R(t-1) Δt)
    v^W(t)    = v^W(t-1)
    ω^R(t)    = ω^R(t-1)
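The constant-velocity prediction can be sketched as follows (a simplified stand-in for MonoSLAM's actual model; quaternion helpers and names are illustrative):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_from_rotvec(w_dt):
    """Quaternion for a rotation of |w_dt| radians about the axis w_dt."""
    angle = np.linalg.norm(w_dt)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = w_dt / angle
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def predict_camera_state(r, q, v, w, dt):
    """Constant-velocity prediction: position integrates linear velocity,
    orientation integrates angular velocity; velocities stay constant."""
    r_new = r + v * dt
    q_new = quat_mul(q, quat_from_rotvec(w * dt))
    return r_new, q_new / np.linalg.norm(q_new), v, w
```

In the real filter this prediction is accompanied by propagating the state covariance through the model's Jacobian.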

24 MonoSLAM (EKF-SLAM): measurement acquisition
Active search [1][2]:
- Predict the measurements: u_i = h_i(x̂_v(t)), the projected image position of landmark y_i under the predicted camera state
- For each landmark, search only inside the region defined by S_i, the covariance matrix of the 2D position of the i-th landmark (points u with u^T S_i^{-1} u below a threshold, around h_i(t))
- Match the stored patch by normalized cross-correlation (NCC) within that region
- If the maximum NCC value exceeds a threshold, accept the measurement z_i(t) at that location
[1] A. J. Davison, "Active Search for Real-Time Vision", International Conference on Computer Vision, 2005.
[2] M. Chli, A. J. Davison, "Active Matching for Visual Tracking", Robotics and Autonomous Systems, 57(12), 2009.

25 MonoSLAM (EKF-SLAM): update
EKF update with the stacked measurements:

    x(t) = x̂(t) + K(t) [ (z_1(t), ..., z_n(t)) − (h_1(t), ..., h_n(t)) ]

where K(t) is the Kalman gain at time t.

26 MonoSLAM (EKF-SLAM): initialization of features
- Delayed: SfM
- Undelayed: inverse depth parameterization [1]
Experiment: runs on a 1.6 GHz Pentium M processor.
[1] J. Civera, A. J. Davison, J. M. M. Montiel, "Inverse Depth Parametrization for Monocular SLAM", IEEE Transactions on Robotics, 24(5), 2008.

27 Comparison
Aspect               | SfM-based                                    | Filter-based
Initialization       | 8-point algorithm                            | Delayed: SfM / Undelayed: inverse depth parameterization
Measurement          | NCC matching (extracted features), KLT       | Active search (prediction & template matching), KLT
Estimation technique | SBA (after P3P algorithm)                    | Kalman filtering (prediction & update)
Tracking             | 300-400 points in a frame                    | Real time with up to ~100 landmarks

28 Demonstration

29 PTAM: Klein and Murray, ISMAR 2007
Title: "Parallel Tracking and Mapping for Small AR Workspaces", known as the PTAM system.
- MANY features, (simple) correlation-based tracking
- Parallel pose-tracking and 3D-reconstruction threads
- Local bundle adjustment (based on keyframes)
- Code, videos, papers, and slides are available from the authors' website

30 Why is SLAM fundamentally harder?
Frame-by-frame SLAM: within the time budget of a single frame,
- find features,
- update the camera pose and the entire map (many DOF),
- draw graphics.

31 Frame-by-frame SLAM
Standard SLAM:
- Updating the entire map every frame is expensive
- Needs a sparse map of high-quality features (A. Davison)
Proposed approach:
- Use a dense map (of lower-quality features)
- Don't update the map every frame: keyframes
- Split tracking and mapping into two threads

32 Parallel Tracking and Mapping
Proposed method: split tracking and mapping into two threads.
- Thread #1 (tracking), every frame: find features, update the camera pose only, draw graphics
- Thread #2 (mapping): update the map

33 Parallel Tracking and Mapping
Tracking thread:
- Responsible for estimating the camera pose and rendering augmented graphics
- Must run at 30 Hz
- Made as robust and accurate as possible
Mapping thread:
- Responsible for providing the map
- Can take a long time per keyframe
- Made as rich and accurate as possible

34 Tracking thread: overall flow
Pre-process frame → project points → measure points → update camera pose (coarse stage), then project points → measure points → update camera pose (fine stage) → draw graphics. Both stages read from the map.

35 Pre-process frame
- Mono and RGB versions of the image
- 4 pyramid levels: 640x480, 320x240, 160x120, 80x60
- Detect FAST corners (E. Rosten et al., ECCV 2006)

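The four-level pyramid from the pre-processing step can be sketched as follows (2x2 block averaging is used here as a simple stand-in for PTAM's subsampling; the function name is illustrative):

```python
import numpy as np

def build_pyramid(img, levels=4):
    """Build an image pyramid by halving the resolution at each level,
    e.g. 640x480 -> 320x240 -> 160x120 -> 80x60."""
    pyramid = [img.astype(np.float32)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        # Crop to even dimensions so 2x2 blocks tile exactly
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        c = prev[:h, :w]
        # Average each 2x2 block to produce the next, coarser level
        down = (c[0::2, 0::2] + c[1::2, 0::2]
                + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0
        pyramid.append(down)
    return pyramid
```

Corners detected on coarse levels give large, stable features for the coarse pose stage; fine levels supply the many features used in the fine stage.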

37 Project points
Use a motion model to update the camera pose: constant velocity model.

    v_t = (P_t − P_{t-1}) / Δt
    P_{t+1} = P_t + Δt v_t

where P_{t-1} and P_t are the previous positions and P_{t+1} is the estimated current position.

38 Project points
Choose a subset of map points to measure:
- ~50 features for the coarse stage
- 1000 randomly selected features for the fine stage

39 Measure points
- Generate an 8x8 matching template (warped from the source keyframe in the map)
- Search a fixed radius around the projected position
- Use zero-mean SSD
- Only search at FAST corner points
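Zero-mean SSD scoring at candidate corners can be sketched as below (a minimal illustration, not PTAM's implementation; subtracting each patch's mean makes the score invariant to additive brightness changes):

```python
import numpy as np

def zssd(patch_a, patch_b):
    """Zero-mean sum of squared differences between two equal-size patches."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    return float(np.sum((a - b) ** 2))

def best_match(template, image, candidates):
    """Score the template only at candidate corner positions (given as
    top-left corners), as in PTAM's search restricted to FAST corners,
    and return the best-scoring position."""
    h, w = template.shape
    scores = {(r, c): zssd(template, image[r:r + h, c:c + w])
              for (r, c) in candidates}
    return min(scores, key=scores.get)
```

Restricting the search to detected corners keeps the per-point cost low enough to measure hundreds of points per frame.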

40 Update camera pose
- 6-DOF problem
- Obtained by SfM (three-point algorithm)

41 Mapping thread: overall flow
Stereo initialization → wait for new keyframe (from the tracker) → add new map points → optimize map → map maintenance

42 Stereo initialization
- Uses the five-point pose algorithm (D. Nistér et al., 2004)
- Requires a pair of frames and feature correspondences
- Provides the initial map
- User input required: two clicks for two keyframes, and smooth motion for feature correspondence

43 Wait for new keyframe
Keyframes are only added if:
- there is a sufficient baseline to the other keyframes
- tracking quality is good
When a keyframe is added:
- the mapping thread stops whatever it is doing
- all points in the map are measured in the keyframe
- new map points are found and added to the map

44 Add new map points
Aim: as many map points as possible.
Check all maximal FAST corners in the keyframe:
- check the corner score, and check whether the point is already in the map
- epipolar search in a neighboring keyframe
- triangulate matches and add them to the map
Repeat for all four image pyramid levels.

45 Optimize map
Use a batch SfM method: bundle adjustment.
- Adjusts map point positions and keyframe poses
- Minimizes the reprojection error of all points in all keyframes (or only the last N keyframes: local bundle adjustment)

46 System and Results
Environment: desktop PC (Intel Core 2 Duo, 2.66 GHz), Linux, C++.
Tracking speed:
- Total: 19.2 ms
- Keyframe preparation: 2.2 ms
- Feature projection: 3.5 ms
- Patch search: 9.8 ms
- Iterative pose update: 3.7 ms

47 System and Results
Mapping scalability and speed: practical limit of ~150 keyframes and ~6000 points.
Bundle adjustment timing, for increasing numbers of keyframes:
- Local bundle adjustment: 170 ms / 270 ms / 440 ms
- Global bundle adjustment: 380 ms / 1.7 s / 6.9 s

48 Draw graphics
- Distorted rendering
- Plane estimation

49 Draw graphics
What can we draw in an unknown scene?
- Assume a single plane is visible at start
- Run a VR simulation on the plane


53 Demonstration

54 Loop closing in SLAM
- Recognize a previously visited location
- Update the beliefs accordingly
Different solutions exist:
- Bag of SIFT features
- Keyframe recognition
- Supplementary sensors
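A minimal sketch of bag-of-features loop detection (the vocabulary, threshold, and function names are illustrative assumptions): each image becomes a normalized histogram over quantized descriptors ("visual words"), and a loop candidate is a keyframe whose histogram is very similar to the current one.

```python
import numpy as np

def bow_histogram(descriptor_ids, vocab_size):
    """Bag-of-words histogram: count how often each visual word
    (quantized feature descriptor) occurs, L2-normalized."""
    h = np.bincount(descriptor_ids, minlength=vocab_size).astype(np.float64)
    n = np.linalg.norm(h)
    return h / n if n > 0 else h

def detect_loop(query_hist, keyframe_hists, threshold=0.8):
    """Return the index of the best-matching keyframe if its cosine
    similarity to the query exceeds the (hypothetical) threshold,
    else None."""
    sims = [float(query_hist @ kf) for kf in keyframe_hists]
    best = int(np.argmax(sims))
    return best if sims[best] > threshold else None
```

On a detected loop, the accumulated drift is distributed over the trajectory (e.g. by re-running the filter update or a pose-graph/bundle adjustment step).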

55 Introduction (section transition)
Next topic: Augmented Reality.

56 Augmented Reality
AR is mostly based on 3D Computer Vision; camera calibration is required.
- First attempts with visual markers (based on homographies)
- Visual features / keypoints (PnP problem)
- SLAM approaches

57 Calibration Matrix K
(The intrinsic calibration matrix K: focal lengths, skew, and principal point.)

58 Augmented Reality: interface with rendering
- K matrix → projection matrix (e.g. OpenGL)
- R, t (pose) → modelview matrix (e.g. OpenGL)
- Distortion parameters have to be estimated: undistort the image, or use distorted rendering
- Visual coherence
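The K-to-projection-matrix mapping can be sketched as follows. This uses one common convention (camera looking down −z, image y axis flipped to OpenGL's bottom-left origin); sign conventions differ between renderers, so treat it as a sketch rather than a canonical formula.

```python
import numpy as np

def opengl_projection_from_K(K, width, height, near, far):
    """Build a 4x4 OpenGL-style projection matrix from the intrinsic
    matrix K = [[fx, s, cx], [0, fy, cy], [0, 0, 1]] and the image size,
    with clipping planes at the given near/far depths."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    s = K[0, 1]
    P = np.zeros((4, 4))
    P[0, 0] = 2.0 * fx / width
    P[0, 1] = -2.0 * s / width
    P[0, 2] = (width - 2.0 * cx) / width    # principal point offset
    P[1, 1] = 2.0 * fy / height
    P[1, 2] = -(height - 2.0 * cy) / height
    P[2, 2] = -(far + near) / (far - near)  # depth mapping to [-1, 1]
    P[2, 3] = -2.0 * far * near / (far - near)
    P[3, 2] = -1.0                          # perspective divide by -z
    return P
```

The pose (R, t) then goes into the modelview matrix unchanged (up to the same axis-convention flip), so virtual content is rendered with exactly the real camera's geometry.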

59 Visual coherence
Realistic integration between virtual and real content.

60 Visual coherence
Requires estimating the lighting conditions:
- light probe, or
- direct estimation of camera artifacts (blur, colors) — advanced 3D CV

61 Introduction (section transition)
Next topic: Applications of 3D Computer Vision.

62 3DCV: 3D reconstruction and printing

63 Appearance modeling: diffuse texture vs. reference picture


65 Measurements, planning (Copyright 2013 Augmented Vision - DFKI)

66 Person reconstruction for the clothing industry

67 Gesture and HCI

68 We are hiring!
- Projects / seminars
- Bachelor and Master theses
- Hiwi positions
Topics: 3D computer vision, reconstruction, 2D computer vision.

69 Next appointment: questions + exercises. Thanks!


More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

CS 4495 Computer Vision Motion and Optic Flow

CS 4495 Computer Vision Motion and Optic Flow CS 4495 Computer Vision Aaron Bobick School of Interactive Computing Administrivia PS4 is out, due Sunday Oct 27 th. All relevant lectures posted Details about Problem Set: You may *not* use built in Harris

More information

Structure from motion

Structure from motion Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t 2 R 3,t 3 Camera 1 Camera

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Stable Vision-Aided Navigation for Large-Area Augmented Reality

Stable Vision-Aided Navigation for Large-Area Augmented Reality Stable Vision-Aided Navigation for Large-Area Augmented Reality Taragay Oskiper, Han-Pang Chiu, Zhiwei Zhu Supun Samarasekera, Rakesh Teddy Kumar Vision and Robotics Laboratory SRI-International Sarnoff,

More information

Multi-stable Perception. Necker Cube

Multi-stable Perception. Necker Cube Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix

More information

Monocular SLAM for a Small-Size Humanoid Robot

Monocular SLAM for a Small-Size Humanoid Robot Tamkang Journal of Science and Engineering, Vol. 14, No. 2, pp. 123 129 (2011) 123 Monocular SLAM for a Small-Size Humanoid Robot Yin-Tien Wang*, Duen-Yan Hung and Sheng-Hsien Cheng Department of Mechanical

More information

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS

Mobile Robotics. Mathematics, Models, and Methods. HI Cambridge. Alonzo Kelly. Carnegie Mellon University UNIVERSITY PRESS Mobile Robotics Mathematics, Models, and Methods Alonzo Kelly Carnegie Mellon University HI Cambridge UNIVERSITY PRESS Contents Preface page xiii 1 Introduction 1 1.1 Applications of Mobile Robots 2 1.2

More information

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech

Visual Odometry. Features, Tracking, Essential Matrix, and RANSAC. Stephan Weiss Computer Vision Group NASA-JPL / CalTech Visual Odometry Features, Tracking, Essential Matrix, and RANSAC Stephan Weiss Computer Vision Group NASA-JPL / CalTech Stephan.Weiss@ieee.org (c) 2013. Government sponsorship acknowledged. Outline The

More information

FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs

FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs FLaME: Fast Lightweight Mesh Estimation using Variational Smoothing on Delaunay Graphs W. Nicholas Greene Robust Robotics Group, MIT CSAIL LPM Workshop IROS 2017 September 28, 2017 with Nicholas Roy 1

More information

CS 395T Lecture 12: Feature Matching and Bundle Adjustment. Qixing Huang October 10 st 2018

CS 395T Lecture 12: Feature Matching and Bundle Adjustment. Qixing Huang October 10 st 2018 CS 395T Lecture 12: Feature Matching and Bundle Adjustment Qixing Huang October 10 st 2018 Lecture Overview Dense Feature Correspondences Bundle Adjustment in Structure-from-Motion Image Matching Algorithm

More information

BIL Computer Vision Apr 16, 2014

BIL Computer Vision Apr 16, 2014 BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Visual Pose Estimation System for Autonomous Rendezvous of Spacecraft

Visual Pose Estimation System for Autonomous Rendezvous of Spacecraft Visual Pose Estimation System for Autonomous Rendezvous of Spacecraft Mark A. Post1, Junquan Li2, and Craig Clark2 Space Mechatronic Systems Technology Laboratory Dept. of Design, Manufacture & Engineering

More information

3D Computer Vision. Dense 3D Reconstruction II. Prof. Didier Stricker. Christiano Gava

3D Computer Vision. Dense 3D Reconstruction II. Prof. Didier Stricker. Christiano Gava 3D Computer Vision Dense 3D Reconstruction II Prof. Didier Stricker Christiano Gava Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Online Learning of Binary Feature Indexing for Real-time SLAM Relocalization

Online Learning of Binary Feature Indexing for Real-time SLAM Relocalization Online Learning of Binary Feature Indexing for Real-time SLAM Relocalization Youji Feng 1, Yihong Wu 1, Lixin Fan 2 1 Institute of Automation, Chinese Academy of Sciences 2 Nokia Research Center, Tampere

More information

Towards Monocular On-Line 3D Reconstruction

Towards Monocular On-Line 3D Reconstruction Towards Monocular On-Line 3D Reconstruction Pekka Paalanen, Ville Kyrki, and Joni-Kristian Kamarainen Machine Vision and Pattern Recognition Research Group Lappeenranta University of Technology, Finland

More information

Matching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar.

Matching. Compare region of image to region of image. Today, simplest kind of matching. Intensities similar. Matching Compare region of image to region of image. We talked about this for stereo. Important for motion. Epipolar constraint unknown. But motion small. Recognition Find object in image. Recognize object.

More information

Epipolar Geometry and Stereo Vision

Epipolar Geometry and Stereo Vision Epipolar Geometry and Stereo Vision Computer Vision Jia-Bin Huang, Virginia Tech Many slides from S. Seitz and D. Hoiem Last class: Image Stitching Two images with rotation/zoom but no translation. X x

More information

AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor

AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor AR Cultural Heritage Reconstruction Based on Feature Landmark Database Constructed by Using Omnidirectional Range Sensor Takafumi Taketomi, Tomokazu Sato, and Naokazu Yokoya Graduate School of Information

More information

Real-Time Vision-Based State Estimation and (Dense) Mapping

Real-Time Vision-Based State Estimation and (Dense) Mapping Real-Time Vision-Based State Estimation and (Dense) Mapping Stefan Leutenegger IROS 2016 Workshop on State Estimation and Terrain Perception for All Terrain Mobile Robots The Perception-Action Cycle in

More information

WANGSIRIPITAK, MURRAY: REASONING ABOUT VISIBILITY AND OCCLUSION 1 Reducing mismatching under time-pressure by reasoning about visibility and occlusion

WANGSIRIPITAK, MURRAY: REASONING ABOUT VISIBILITY AND OCCLUSION 1 Reducing mismatching under time-pressure by reasoning about visibility and occlusion Reducing mismatching under time-pressure by reasoning about visibility and occlusion S Wangsiripitak D W Murray Department of Engineering Science University of Oxford Parks Road, Oxford, OX 3PJ www.robots.ox.ac.uk/activevision

More information

Visual Tracking (1) Pixel-intensity-based methods

Visual Tracking (1) Pixel-intensity-based methods Intelligent Control Systems Visual Tracking (1) Pixel-intensity-based methods Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

Real-Time Model-Based SLAM Using Line Segments

Real-Time Model-Based SLAM Using Line Segments Real-Time Model-Based SLAM Using Line Segments Andrew P. Gee and Walterio Mayol-Cuevas Department of Computer Science, University of Bristol, UK {gee,mayol}@cs.bris.ac.uk Abstract. Existing monocular vision-based

More information

Flow Estimation. Min Bai. February 8, University of Toronto. Min Bai (UofT) Flow Estimation February 8, / 47

Flow Estimation. Min Bai. February 8, University of Toronto. Min Bai (UofT) Flow Estimation February 8, / 47 Flow Estimation Min Bai University of Toronto February 8, 2016 Min Bai (UofT) Flow Estimation February 8, 2016 1 / 47 Outline Optical Flow - Continued Min Bai (UofT) Flow Estimation February 8, 2016 2

More information

Computational Optical Imaging - Optique Numerique. -- Single and Multiple View Geometry, Stereo matching --

Computational Optical Imaging - Optique Numerique. -- Single and Multiple View Geometry, Stereo matching -- Computational Optical Imaging - Optique Numerique -- Single and Multiple View Geometry, Stereo matching -- Autumn 2015 Ivo Ihrke with slides by Thorsten Thormaehlen Reminder: Feature Detection and Matching

More information

Fast and Stable Tracking for AR fusing Video and Inertial Sensor Data

Fast and Stable Tracking for AR fusing Video and Inertial Sensor Data Fast and Stable Tracking for AR fusing Video and Inertial Sensor Data Gabriele Bleser, Cedric Wohlleber, Mario Becker, Didier Stricker Fraunhofer IGD Fraunhoferstraße 5 64283 Darmstadt, Germany {gbleser,

More information

COMPUTER VISION Multi-view Geometry

COMPUTER VISION Multi-view Geometry COMPUTER VISION Multi-view Geometry Emanuel Aldea http://hebergement.u-psud.fr/emi/ Computer Science and Multimedia Master - University of Pavia Triangulation - the building block

More information

ORB SLAM 2 : an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

ORB SLAM 2 : an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras ORB SLAM 2 : an OpenSource SLAM System for Monocular, Stereo and RGBD Cameras Raul urartal and Juan D. Tardos Presented by: Xiaoyu Zhou Bolun Zhang Akshaya Purohit Lenord Melvix 1 Outline Background Introduction

More information

Visual SLAM. An Overview. L. Freda. ALCOR Lab DIAG University of Rome La Sapienza. May 3, 2016

Visual SLAM. An Overview. L. Freda. ALCOR Lab DIAG University of Rome La Sapienza. May 3, 2016 An Overview L. Freda ALCOR Lab DIAG University of Rome La Sapienza May 3, 2016 L. Freda (University of Rome La Sapienza ) Visual SLAM May 3, 2016 1 / 39 Outline 1 Introduction What is SLAM Motivations

More information

Depth from two cameras: stereopsis

Depth from two cameras: stereopsis Depth from two cameras: stereopsis Epipolar Geometry Canonical Configuration Correspondence Matching School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture

More information

Real-Time Monocular SLAM with Straight Lines

Real-Time Monocular SLAM with Straight Lines Real-Time Monocular SLAM with Straight Lines Paul Smith, Ian Reid and Andrew Davison Department of Engineering Science, University of Oxford, UK Department of Computing, Imperial College London, UK [pas,ian]@robots.ox.ac.uk,

More information

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen Video Mosaics for Virtual Environments, R. Szeliski Review by: Christopher Rasmussen September 19, 2002 Announcements Homework due by midnight Next homework will be assigned Tuesday, due following Tuesday.

More information

Visual Tracking (1) Feature Point Tracking and Block Matching

Visual Tracking (1) Feature Point Tracking and Block Matching Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

Semi-Dense Direct SLAM

Semi-Dense Direct SLAM Computer Vision Group Technical University of Munich Jakob Engel Jakob Engel, Daniel Cremers David Caruso, Thomas Schöps, Lukas von Stumberg, Vladyslav Usenko, Jörg Stückler, Jürgen Sturm Technical University

More information

Srikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah

Srikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Presentation Outline 1 2 3 Sample Problem

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix

More information

Comparison between Motion Analysis and Stereo

Comparison between Motion Analysis and Stereo MOTION ESTIMATION The slides are from several sources through James Hays (Brown); Silvio Savarese (U. of Michigan); Octavia Camps (Northeastern); including their own slides. Comparison between Motion Analysis

More information

Tutorial on 3D Surface Reconstruction in Laparoscopic Surgery. Simultaneous Localization and Mapping for Minimally Invasive Surgery

Tutorial on 3D Surface Reconstruction in Laparoscopic Surgery. Simultaneous Localization and Mapping for Minimally Invasive Surgery Tutorial on 3D Surface Reconstruction in Laparoscopic Surgery Simultaneous Localization and Mapping for Minimally Invasive Surgery Introduction University of Bristol using particle filters to track football

More information

Lecture 10: Multi view geometry

Lecture 10: Multi view geometry Lecture 10: Multi view geometry Professor Fei Fei Li Stanford Vision Lab 1 What we will learn today? Stereo vision Correspondence problem (Problem Set 2 (Q3)) Active stereo vision systems Structure from

More information

Real-time Image-based Reconstruction of Pipes Using Omnidirectional Cameras

Real-time Image-based Reconstruction of Pipes Using Omnidirectional Cameras Real-time Image-based Reconstruction of Pipes Using Omnidirectional Cameras Dipl. Inf. Sandro Esquivel Prof. Dr.-Ing. Reinhard Koch Multimedia Information Processing Christian-Albrechts-University of Kiel

More information

Hidden View Synthesis using Real-Time Visual SLAM for Simplifying Video Surveillance Analysis

Hidden View Synthesis using Real-Time Visual SLAM for Simplifying Video Surveillance Analysis 2011 IEEE International Conference on Robotics and Automation Shanghai International Conference Center May 9-13, 2011, Shanghai, China Hidden View Synthesis using Real-Time Visual SLAM for Simplifying

More information

L15. POSE-GRAPH SLAM. NA568 Mobile Robotics: Methods & Algorithms

L15. POSE-GRAPH SLAM. NA568 Mobile Robotics: Methods & Algorithms L15. POSE-GRAPH SLAM NA568 Mobile Robotics: Methods & Algorithms Today s Topic Nonlinear Least Squares Pose-Graph SLAM Incremental Smoothing and Mapping Feature-Based SLAM Filtering Problem: Motion Prediction

More information

Lecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20

Lecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20 Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit

More information

Computational Optical Imaging - Optique Numerique. -- Multiple View Geometry and Stereo --

Computational Optical Imaging - Optique Numerique. -- Multiple View Geometry and Stereo -- Computational Optical Imaging - Optique Numerique -- Multiple View Geometry and Stereo -- Winter 2013 Ivo Ihrke with slides by Thorsten Thormaehlen Feature Detection and Matching Wide-Baseline-Matching

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting R. Maier 1,2, K. Kim 1, D. Cremers 2, J. Kautz 1, M. Nießner 2,3 Fusion Ours 1

More information

Removing Scale Biases and Ambiguity from 6DoF Monocular SLAM Using Inertial

Removing Scale Biases and Ambiguity from 6DoF Monocular SLAM Using Inertial 28 IEEE International Conference on Robotics and Automation Pasadena, CA, USA, May 9-23, 28 Removing Scale Biases and Ambiguity from 6DoF Monocular SLAM Using Inertial Todd Lupton and Salah Sukkarieh Abstract

More information

An Overview of Matchmoving using Structure from Motion Methods

An Overview of Matchmoving using Structure from Motion Methods An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu

More information

C280, Computer Vision

C280, Computer Vision C280, Computer Vision Prof. Trevor Darrell trevor@eecs.berkeley.edu Lecture 11: Structure from Motion Roadmap Previous: Image formation, filtering, local features, (Texture) Tues: Feature-based Alignment

More information

Stereo Vision. MAN-522 Computer Vision

Stereo Vision. MAN-522 Computer Vision Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in

More information

Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview

Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview Nonlinear State Estimation for Robotics and Computer Vision Applications: An Overview Arun Das 05/09/2017 Arun Das Waterloo Autonomous Vehicles Lab Introduction What s in a name? Arun Das Waterloo Autonomous

More information