Simultaneous Localisation and Mapping with a Single Camera

Abhishek Aneja and Zhichao Chen

3 December

1 Abstract

Image reconstruction is now common in the field of computer vision, but reconstructing a scene while simultaneously tracking the camera position is still an open area of work. This task of estimating camera motion from measurements of a continuously expanding set of self-mapped visual features belongs to a class of problems known in the robotics community as Simultaneous Localisation and Mapping (SLAM). For a camera moving through an unknown scene, the task is especially challenging when it must be performed in real time rather than as offline processing. We present a method for a single camera moving in 2D that maps a sparse set of features using motion modelling and an information-guided active measurement strategy, where the locations of the features and the camera motion are controlled by a simulator.

2 Introduction

Robot mapping has been an active area of research for the past few decades. It provides a spatial model of the physical environment acquired through robot navigation. Structure from motion research in computer vision has reached the point where fully automated reconstruction of the trajectory of a video camera moving through a previously unknown scene is becoming routine [?], but these and other successful approaches seen to date have been formulated as off-line algorithms that require batch, simultaneous processing of all the images acquired in the sequence [2]. Building a truly autonomous mobile robot that can map an unknown environment in real time, however, enforces hard constraints on the processing permissible and hence still poses a great challenge.

Is there a need for autonomous mobile robots in the real world? Such robots operate well in a defined region, but they need to be able to identify the limits of that region and avoid potential hazards while carrying out their tasks. A robot may need to monitor its position accurately and execute definite movements; this can sometimes be facilitated in a simple way with external help, such as a prior map with landmark beacons in known positions. Exploring remote or dangerous areas, however, even where some assistance is still possible, requires a mobile robot that can navigate truly autonomously [1].

Among previous work, Chiuso et al. [3] present a real-time, full-covariance Kalman Filter-based approach to sequential structure from motion, but they aim towards model generation rather than localisation. Davison [2] obtained successful real-time results for a camera moving in a 3D world while correctly estimating the landmark positions; feature visibility is calculated from the relative position of the camera and the feature, together with the saved position of the camera from which the feature was initialised.

We present a method to track a robot moving in a 2D world while simultaneously localising the feature points using an Extended Kalman Filter. The code has not yet been tried on a real-time sequence; it works well for simulator-generated motion of the camera and features.

3 SLAM with First Order Uncertainty Propagation

Extended Kalman Filter (EKF)-based algorithms, propagating first-order uncertainty in the coupled estimates of robot and map feature positions, combined with various techniques for reducing computational complexity in large maps, have shown great success in enabling robots to estimate their locations accurately and robustly [4]. The overall state x of the system is represented by a vector which can be partitioned into the state a of the camera (or robot) and the states p^{(i)} of the entries in the map of the surroundings. The state vector is accompanied by a covariance matrix P, which represents the uncertainty, to first order, in all quantities of the state vector:

x = \begin{pmatrix} a \\ p^{(1)} \\ p^{(2)} \\ p^{(3)} \\ \vdots \end{pmatrix}, \qquad
P = \begin{pmatrix}
P_{aa} & P_{a p^{(1)}} & P_{a p^{(2)}} & \cdots \\
P_{p^{(1)} a} & P_{p^{(1)} p^{(1)}} & P_{p^{(1)} p^{(2)}} & \cdots \\
P_{p^{(2)} a} & P_{p^{(2)} p^{(1)}} & P_{p^{(2)} p^{(2)}} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}

Feature estimates p^{(i)} can be added to or deleted from the map freely as required, making the state vector x and the covariance matrix P grow and shrink accordingly. In general, x and P vary in two steps. First, during motion, a motion model calculates how the camera (or robot) moves and how its uncertainty increases. Second, when feature measurements are obtained, a measurement model describes how the map and robot uncertainty can be reduced. The position estimates of a cluster of close features, as inherent in map building, are uncertain with respect to the world reference frame but highly correlated among themselves (i.e. their relative positions are well known). Holding this correlation information means that a measurement of any one feature of the cluster correctly affects the estimates of the others, and it is the key to being able to re-visit and recognise known areas after periods of neglect.
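To make this bookkeeping concrete, the following is a minimal sketch, not the simulator code used for this paper, of how the joint state vector x and the covariance matrix P can grow and shrink as features are added and deleted. The class name, the 2D feature dimension and the zero initial cross-covariances are illustrative assumptions; a full implementation would derive the cross-covariance of a new feature from the initialising measurement.

import numpy as np

# Bookkeeping for the joint SLAM state of Section 3 (illustrative sketch).
# The camera state a has 6 entries (see Section 4); each feature p^(i) has 2.
CAM_DIM, FEAT_DIM = 6, 2

class SlamState:
    def __init__(self, a0, P_aa):
        self.x = np.asarray(a0, dtype=float)    # state starts as the camera state a
        self.P = np.asarray(P_aa, dtype=float)  # 6x6 camera covariance P_aa
        self.n_features = 0

    def add_feature(self, p_init, P_pp):
        # Augment x and P with a new feature estimate. The new cross-covariance
        # blocks are zeroed here for simplicity (an assumption); deriving them
        # from the initialising measurement is what couples the new feature
        # to the camera and to the rest of the map.
        old = self.x.size
        self.x = np.concatenate([self.x, p_init])
        P_new = np.zeros((old + FEAT_DIM, old + FEAT_DIM))
        P_new[:old, :old] = self.P
        P_new[old:, old:] = P_pp
        self.P = P_new
        self.n_features += 1
        return self.n_features - 1              # index of the new feature

    def delete_feature(self, i):
        # Delete feature i's rows and columns, shrinking x and P in place.
        s = CAM_DIM + i * FEAT_DIM
        keep = np.r_[0:s, s + FEAT_DIM:self.x.size]
        self.x = self.x[keep]
        self.P = self.P[np.ix_(keep, keep)]
        self.n_features -= 1

Because P keeps the full cross-covariance blocks, an update from a measurement of any one feature propagates to the camera estimate and to all correlated features, which is exactly the re-visiting behaviour described above.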

4 Motion Model for a Smoothly Moving Camera

We define the world coordinate frame, World, to be fixed in the world, and the camera coordinate frame, Camera, to be fixed with respect to the camera. The position vector a is made up of the location of the camera, defined by [a_x(t), a_y(t), a_θ(t)]^T.

[Figure 1: Frames and Vector Geometry]

We assume the camera to be moving at a constant velocity, but the statistical model of its motion over a time step is that, on average, undetermined accelerations occur with a Gaussian profile. We therefore also add the velocity to the position vector a, which becomes

a = [a_x(t), a_y(t), a_θ(t), ȧ_x(t), ȧ_y(t), ȧ_θ(t)]^T,

where ȧ_x(t) and ȧ_y(t) represent the velocity in the x and y directions and ȧ_θ(t) represents the angular velocity.

5 Visual Feature Measurements

In a real sequence, as done by Davison [2], features would be found using the Shi and Tomasi tracker [5]. In our experiments, we place the features randomly in the image, start off with some known features, and estimate the rest to lie away from the original features. In this section we consider the measurement model of the features already in the SLAM map. The estimates of the camera position a and the feature position p^{(i)} allow the value of a measurement to be predicted. From Figure 1, the position of a feature point relative to the camera is expected to be:

\tilde{p}^{(i)}(t) = R(t) \, [p^{(i)} - a(t)]   (1)

R(t) = \begin{pmatrix} \cos a_\theta & \sin a_\theta \\ -\sin a_\theta & \cos a_\theta \end{pmatrix}

where \tilde{p}^{(i)}(t) is the feature point relative to the camera coordinate frame and R(t) is the rotation matrix. The projection of the features onto the image plane, i.e. the change from world coordinates to image coordinates, can then be represented by:

\begin{pmatrix} \beta u \\ \beta \end{pmatrix} = \begin{pmatrix} kf & u_0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \cos a_\theta & \sin a_\theta \\ -\sin a_\theta & \cos a_\theta \end{pmatrix} \begin{pmatrix} p_x^{(i)} - a_x(t) \\ p_y^{(i)} - a_y(t) \end{pmatrix}

where u_0 is the principal point of the camera (in pixels). Putting these equations together yields

u^{(i)}(t) = u_0 + f \, \frac{(p_x^{(i)} - a_x(t)) \cos a_\theta + (p_y^{(i)} - a_y(t)) \sin a_\theta}{(p_y^{(i)} - a_y(t)) \cos a_\theta - (p_x^{(i)} - a_x(t)) \sin a_\theta}   (2)

The uncertainty in this prediction, represented in the covariance matrix P, gives the shape of a Gaussian pdf over image coordinates, and choosing a number of standard deviations defines an elliptical search window within which the feature should lie with high probability.
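As a concrete reading of the constant-velocity model of Section 4 and of equations (1) and (2), here is a minimal sketch. The time step dt, the acceleration noise levels, the focal length f and the principal point u0 are illustrative parameters not given in the paper, and the process-noise discretisation is simplified.

import numpy as np

def predict_camera(a, dt, accel_std=(0.5, 0.5, 0.1)):
    # Constant-velocity motion model (Section 4) for the camera state
    # a = [a_x, a_y, a_theta, v_x, v_y, v_theta].
    F = np.eye(6)
    F[0, 3] = F[1, 4] = F[2, 5] = dt            # position += velocity * dt
    a_pred = F @ a
    # Undetermined accelerations with a Gaussian profile enter as process
    # noise on the velocities (simplified: a fuller model would also couple
    # this noise into the positions over the time step).
    Q = np.zeros((6, 6))
    Q[3:, 3:] = np.diag(np.square(accel_std)) * dt
    return a_pred, F, Q

def predict_measurement(a, p, f=500.0, u0=320.0):
    # Equations (1) and (2): rotate the world-frame offset p - a into the
    # camera frame, then project it to the pixel coordinate u.
    c, s = np.cos(a[2]), np.sin(a[2])
    dx, dy = p[0] - a[0], p[1] - a[1]
    x_rel = c * dx + s * dy                     # eq. (1), first component
    y_rel = -s * dx + c * dy                    # depth along the optical axis
    return u0 + f * x_rel / y_rel               # eq. (2)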

6 Experiments and Results

6.1 Simulation 1: Camera moving in a straight line and passing through the features

The velocity is assumed to be 1 m/s in the x direction and 0 m/s in the y direction. The initial position of the camera frame is assumed to be zero, with an initial covariance reflecting the uncertainty in the initial measurements. For the unknown features, the initial estimates start a few metres away from the original location of the feature in both the x and y directions. The known features have zero uncertainty, so all the rows and columns of the covariance matrix P corresponding to these features have values equal to zero. The uncertainty for the unknown features is high, so their entries P_xx, P_yy, P_xy, P_yx are set to a high value (75 or more). The features were successfully estimated with an uncertainty as low as about 1 cm.

The results shown are for a 30-step simulation with velocities of 1 m/s and 0 m/s in the x and y directions, with 5 known features and the remaining features unknown.

6.2 Simulation 2: Camera moves along a straight line with a limited visual cone

We start with a set of known feature points for the initialisation of the EKF. A number of landmarks were mapped. The field of view of the camera is limited. The camera moves with a constant velocity of 0.5 m/s along the x axis, and the rotational velocity of the camera is zero. We set a high prior covariance P for the unknown feature points. At each step, a predictive update was performed based on the constant-velocity motion model, followed by a measurement update based on only one feature point per step. When a landmark is out of view, it is removed from the state. The ellipses around the landmarks illustrate the residual uncertainty that remains after mapping, as specified by the covariance matrix. In order to compare the uncertainties, the orientations of all the ellipses are drawn uniformly.

6.3 Simulation 3: Camera makes a circle

We again start with a set of known feature points for the initialisation of the EKF. A number of landmarks were mapped, all of which remain in the field of view of the camera throughout. The maximum translational velocity is 0.1 m/s, and the maximum rotational velocity is likewise bounded. To make a circle, the camera states (velocity and rotation) have to vary continuously.
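The per-step procedure of the simulations, a constant-velocity prediction followed by an EKF update from a single feature measurement as described in Section 6.2, can be sketched as follows. This reuses predict_camera and predict_measurement from the earlier sketch; the measurement noise SIGMA_U and the numerical Jacobian are illustrative choices, not details taken from the paper.

import numpy as np

SIGMA_U = 1.0   # assumed image measurement noise (pixels)

def ekf_step(x, P, dt, feat_idx, u_meas):
    # One simulation step on the joint state x = [camera(6), feat_0(2), ...]:
    # constant-velocity prediction, then an update from one feature.
    n = x.size
    a_pred, F_cam, Q_cam = predict_camera(x[:6], dt)
    F = np.eye(n); F[:6, :6] = F_cam            # map features are static
    Q = np.zeros((n, n)); Q[:6, :6] = Q_cam
    x = x.copy(); x[:6] = a_pred
    P = F @ P @ F.T + Q

    # Numerical Jacobian of the scalar measurement u: only the camera block
    # and the observed feature's block are non-zero.
    s = 6 + 2 * feat_idx
    u_pred = predict_measurement(x[:6], x[s:s + 2])
    H = np.zeros((1, n))
    eps = 1e-6
    for j in list(range(6)) + [s, s + 1]:
        xp = x.copy(); xp[j] += eps
        H[0, j] = (predict_measurement(xp[:6], xp[s:s + 2]) - u_pred) / eps

    S = H @ P @ H.T + SIGMA_U ** 2              # innovation covariance; its
                                                # size sets the search window
                                                # of Section 5
    K = (P @ H.T) / S                           # Kalman gain
    x = x + (K * (u_meas - u_pred)).ravel()
    P = (np.eye(n) - K @ H) @ P
    return x, P

Removing a landmark that leaves the field of view, as done in Simulation 2, then corresponds to the delete_feature operation from the state-bookkeeping sketch in Section 3.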

7 Conclusion

The 2D simulations show that the EKF with a single-camera measurement model can identify the camera localisation as well as the landmark positions. With more measurements over time, the uncertainty ellipses shrink gradually. The development in this paper can also be extended to 3D reconstruction. The simulations show that if the known feature points are always in the field of view of the camera, the estimates are more accurate and the system works stably. However, when the feature points are routinely in and out of the field of view of the camera, the estimation errors may be bigger and can even result in failure of the system. Future research should focus on coping with accumulated error and increasing the stability of the system.

[Figures: Results of the First Experiment — camera state over the simulation steps; feature positions with the robot movement; landmark states over time (estimated vs. true x and y); the estimated positions of features 11 and 15 at several steps; uncertainty.]

[Figures: Results of the Second Experiment — map estimates at several intermediate steps and at the final step.]

[Figures: Results of the Third Experiment — map estimates at steps 1 and 15, a later step, and the final result.]

References

[1] Andrew J. Davison. Mobile Robot Navigation Using Active Vision. D.Phil. thesis, University of Oxford, 1998.

[2] Andrew J. Davison. Real-Time Simultaneous Localisation and Mapping with a Single Camera. In Proceedings of the International Conference on Computer Vision, Nice, October 13-16, 2003.

[3] A. Chiuso, P. Favaro, H. Jin, and S. Soatto. "MFm": 3-D motion from 2-D motion causally integrated over time. In Proceedings of the 6th European Conference on Computer Vision, Dublin, 2000.

[4] E. Foxlin. Generalized architecture for simultaneous localization, auto-calibration and map-building. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems, 2002.

[5] J. Shi and C. Tomasi. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 593-600, 1994.