Aerial Robotic Autonomous Exploration & Mapping in Degraded Visual Environments
Kostas Alexis, Autonomous Robots Lab, University of Nevada, Reno
Motivation
- Aerial robotic operation in GPS-denied Degraded Visual Environments (DVEs).
- Ability to autonomously explore, monitor, and map in conditions of complete darkness, haze, and other DVEs.
- Extends the operational abilities of aerial robots.
- A critical step for industrial, inspection & maintenance, safety, and security applications.
- Indicative application domains: Nuclear Site Decommissioning, Oil & Gas Industry Inspection, Infrastructure Inspection, Surveillance & Security Monitoring.
Research Directions
- Multi-modal sensor fusion to enable GPS-denied localization and mapping in DVEs.
- A path planning ensemble for autonomous exploration in DVEs or optimized inspection.
- An active perception approach for reliable autonomous operation subject to the challenges of visually degraded environments. Pure volumetric exploration is not sufficient: the selected viewpoints and their order heavily influence the localization of the robot.
Multi-Modal Mapping Unit
- Multi-modal: Visual (LED-synchronized or Near Infrared), Inertial, 3D Time-of-Flight. Future extensions: sound, RADAR.
- Hardware time-synchronization of visual-inertial data; software synchronization of visual-inertial and depth data.
- An Intel NUC5i7RYH is responsible for the localization and mapping pipeline.
Multi-Modal Mapping Unit
Multi-modal fusion baselines and new research direction:
- Baseline #1: Visual-Inertial enhanced Geometrically Stable ICP.
- Baseline #2: Visual-Inertial / Depth odometry estimates loosely coupled through an EKF.
- Research Direction: Multi-Modal Features, building on L. O. Vasconcelos, E. R. Nascimento, M. F. M. Campos, "KVD: Scale Invariant Keypoints by Combining Visual and Depth Data".
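The loosely-coupled EKF fusion of Baseline #2 can be illustrated with a minimal 1-D sketch: two independent odometry sources (visual-inertial and depth/ICP), each reporting an estimate with its own variance, are fused through sequential Kalman updates. All numbers and the scalar state are illustrative placeholders, not the actual onboard pipeline.

```python
# Toy loosely-coupled fusion: two independent odometry sources
# (visual-inertial and depth/ICP) each provide a position estimate
# with its own variance; scalar Kalman updates fuse them in turn.

def kalman_update(x, p, z, r):
    """Fuse state (mean x, variance p) with measurement (z, variance r)."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # corrected estimate
    p_new = (1.0 - k) * p    # reduced variance
    return x_new, p_new

# Prior from the filter prediction step
x, p = 0.0, 4.0
# Visual-inertial odometry reports 1.0 m with variance 1.0
x, p = kalman_update(x, p, 1.0, 1.0)
# Depth (ICP) odometry reports 0.8 m with variance 0.5
x, p = kalman_update(x, p, 0.8, 0.5)
print(round(x, 3), round(p, 3))
```

Note that the fused variance ends up below that of either individual source, which is the point of combining the modalities.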
Towards a Complete Planning Ensemble
- A complete path planning ensemble addressing the problems of optimized coverage and robust autonomous exploration.
- Key challenge of exploration in DVEs: sensor data ill-conditioning is to be expected. Localizability-aware exploration is the key.
- Active perception and belief propagation methods provide the theoretical grounds for our approach.
- Future direction: curiosity-driven exploration.
Uncertainty-aware Exploration & Mapping: Problem Definition
The overall problem is that of exploring an unknown bounded 3D volume V_E ⊂ R³, while aiming to minimize the localization and mapping uncertainty as evaluated through a metric over the robot pose and landmarks probabilistic belief.
Problem 1 (Volumetric Exploration): Given a bounded volume V_E, find a collision-free path σ starting at an initial configuration ξ_init ∈ Ξ that leads to identifying the free and occupied parts V_E^free and V_E^occ when being executed, such that there does not exist any collision-free configuration from which any piece of V_E \ {V_E^free, V_E^occ} could be perceived.
Problem 2 (Belief Uncertainty-aware Planning): Given a volume V_M ⊆ V_E, find a collision-free path σ_M starting at an initial configuration ξ_0 ∈ Ξ and ending in a configuration ξ_final ∈ Ξ that aims to improve the robot's localization and mapping confidence by following paths of optimized expected robot pose and tracked landmarks covariance.
The combined problem addresses both objectives.
Robot Configuration
Uncertainty-aware Exploration & Mapping
Receding Horizon Exploration and Mapping Planner (rhemplanner): a two-step / nested-optimization path planning paradigm.
rhemplanner - Exploration Step
rhemplanner - Exploration Step
Exploration gain with probabilistic re-observation: aiming to maximize newly explored space and to re-observe space with decreased confidence of being mapped as occupied.
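The gain described above can be sketched as follows: unmapped voxels visible from a candidate viewpoint contribute new-exploration gain, voxels already mapped as occupied but with low confidence (occupancy probability still close to 0.5) contribute a re-observation term, and the total is discounted by the path cost. The weights, the voxel model, and the discount factor are illustrative assumptions, not the planner's actual tuning.

```python
import math

# Simplified exploration gain with probabilistic re-observation.
# map_occupancy maps a voxel to its occupancy probability, or holds
# no entry if the voxel is still unmapped.

def exploration_gain(visible_voxels, map_occupancy, path_cost, lam=0.5):
    gain = 0.0
    for v in visible_voxels:
        p = map_occupancy.get(v)
        if p is None:
            gain += 1.0                # newly explored space
        elif p > 0.5:
            gain += 2.0 * (1.0 - p)   # re-observe low-confidence "occupied" voxels
    return gain * math.exp(-lam * path_cost)  # discount by path cost

occ = {(0, 0): 0.95, (0, 1): 0.6}     # voxel (0, 2) is still unmapped
g = exploration_gain([(0, 0), (0, 1), (0, 2)], occ, path_cost=1.0)
print(round(g, 3))
```

A confidently occupied voxel (p = 0.95) contributes almost nothing, while the barely-occupied voxel (p = 0.6) adds a large re-observation term, matching the slide's intent.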
rhemplanner - Uncertainty-aware Step
rhemplanner - Uncertainty-aware Step
The robot performs onboard localization and mapping; in our experiments, visual-inertial localization. The assumptions are:
- The pose, the features/landmarks, and their uncertainties are estimated.
- Dense, volumetric mapping takes place.
To estimate its pose, the robot relies on tracking landmarks with its sensor systems. The system performs odometry in an EKF fashion, with a filter state comprising the robot states and the tracked landmarks.
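The filter-state expression on this slide did not survive extraction. As a reference point, the robocentric state of a ROVIO-style visual-inertial EKF (following Bloesch et al., IROS 2015; the exact slide notation may differ) can be written as:

```latex
x = \left( r,\; v,\; q,\; b_f,\; b_\omega,\; c,\; z,\; \mu_0, \ldots, \mu_J,\; \rho_0, \ldots, \rho_J \right)
```

with robocentric position $r$, velocity $v$, attitude $q$, accelerometer and gyroscope biases $b_f, b_\omega$, camera extrinsics $(c, z)$, and, for each tracked landmark $j$, a bearing vector $\mu_j$ and a distance parameter $\rho_j$.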
rhemplanner - Uncertainty-aware Step
Belief propagation: in order to identify the paths that minimize the robot's uncertainty, a mechanism to propagate the robot's belief about its pose and the tracked features has to be established. Building on: Michael Bloesch, Sammy Omari, Marco Hutter, Roland Siegwart, "ROVIO: Robust Visual Inertial Odometry Using a Direct EKF-Based Approach", IROS 2015.
rhemplanner - Uncertainty-aware Step
Propagate the robot's belief about its pose and the tracked landmarks: prediction step (state propagation).
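In standard EKF form (a generic sketch of the prediction step, not necessarily the exact discretization used onboard), the mean is propagated through the motion model and the covariance through its Jacobian:

```latex
\hat{x}_{k|k-1} = f\!\left(\hat{x}_{k-1|k-1}, u_k\right), \qquad
P_{k|k-1} = F_k P_{k-1|k-1} F_k^\top + Q_k, \qquad
F_k = \left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1},\, u_k}
```

where $u_k$ is the IMU input and $Q_k$ the process noise covariance.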
rhemplanner - Uncertainty-aware Step
Propagate the robot's belief about its pose and the tracked landmarks: update step (filter update).
- Identify which landmarks are expected to be visible given the next pose and the known map.
- By stacking the measurement terms for all visible landmarks, a standard EKF update step is directly performed to derive the new estimate of the robot's belief about its state and the tracked features.
- Compute the propagated belief covariance matrix.
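The stacked update can be sketched in one dimension: the robot state is a scalar position, each visible landmark at known position m_i yields a relative measurement z_i = m_i - x (so H_i = -1), and stacking both landmarks gives a single EKF update. All numbers and the measurement model are toy assumptions, not the actual ROVIO update.

```python
# Stacked EKF update sketch for a 1-D robot position state with two
# visible landmarks. H = [-1, -1]^T since z_i = m_i - x.

def inv2(a, b, c, d):
    """Inverse of the 2x2 matrix [[a, b], [c, d]], returned row-major."""
    det = a * d - b * c
    return d / det, -b / det, -c / det, a / det

x, p = 1.0, 1.0                      # predicted position and variance
landmarks = [5.0, 3.0]               # known landmark positions
z = [3.8, 2.1]                       # measured relative positions
r = [0.5, 0.5]                       # per-landmark measurement noise

# Innovations y_i = z_i - h_i(x), with h_i(x) = m_i - x
y = [z[i] - (landmarks[i] - x) for i in range(2)]
# Innovation covariance S = H P H^T + R; here every H_i H_j^T P term is p
s = (p + r[0], p, p, p + r[1])
si = inv2(*s)
# Kalman gain K = P H^T S^{-1}  (a 1x2 row vector)
k = (-p * (si[0] + si[2]), -p * (si[1] + si[3]))
x_new = x + k[0] * y[0] + k[1] * y[1]
p_new = p - (k[0] * (-p) + k[1] * (-p))   # P - K H P
print(round(x_new, 3), round(p_new, 3))
```

Stacking both landmarks shrinks the variance more than either measurement alone would, which is exactly what the planner exploits when scoring candidate viewpoints.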
rhemplanner - Uncertainty-aware Step
Uncertainty optimization: to determine which path minimizes the robot's uncertainty about its pose and the tracked landmarks, a metric quantifying the size of the covariance ellipsoid has to be defined.
D-optimality metric: broadly, maximize the determinant of the information matrix X^T X of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates.
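One common form of the D-optimality criterion over a covariance matrix is the geometric mean of its eigenvalues, exp((1/n) Σ log λ_i): smaller values mean a smaller uncertainty ellipsoid. The sketch below uses diagonal covariances (whose eigenvalues are the diagonal entries) and illustrative numbers; the planner's exact normalization may differ.

```python
import math

# D-optimality as the geometric mean of the covariance eigenvalues.
# For a diagonal covariance, the eigenvalues are the diagonal entries.

def d_optimality(diag_cov):
    n = len(diag_cov)
    return math.exp(sum(math.log(v) for v in diag_cov) / n)

before = [0.4, 0.4, 0.9]   # covariance diagonal before a candidate path
after = [0.2, 0.2, 0.5]    # propagated covariance along the path
print(d_optimality(before) > d_optimality(after))
```

A candidate path whose propagated covariance has lower D-optimality is preferred, since it corresponds to a tighter covariance ellipsoid.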
rhemplanner Algorithm

First Planning Step (exploration):
  ξ_0 ← current vehicle configuration
  Initialize T_E with ξ_0
  g_best^E ← 0                       // set best exploration gain to zero
  n_best^E ← n_0(ξ_0)                // set best exploration node to the root
  N_T^E ← number of nodes in T_E
  while N_T^E < N_max^E or g_best^E == 0 do
    Incrementally build T_E by adding n_new^E(ξ_new); N_T^E ← N_T^E + 1
    if ExplorationGain(n_new^E) > g_best^E then
      n_best^E ← n_new^E; g_best^E ← ExplorationGain(n_new^E)
    if N_T^E > N_TOT^E then Terminate planning
  σ_RH^E, n_RH, ξ_RH ← ExtractBestPathSegment(n_best^E)
  S_ξRH ← LocalSet(ξ_RH)

Second Planning Step (uncertainty-aware):
  Propagate the robot belief along σ_RH^E
  a ← 1                              // number of admissible paths
  g_a^M ← BeliefGain(σ_RH^E)
  g_best^M ← g_a^M                   // straight-path belief gain
  σ_best^M ← σ_RH^E
  while N_T^M < N_max^M or V_T^M ∩ S_ξRH = ∅ do
    Incrementally build T_M by adding n_new^M(ξ_new)
    Propagate the robot belief from the current to the planned vertex
    if ξ_new ∈ S_ξRH then
      Add new vertex n_new^M at ξ_RH and connect; a ← a + 1
      σ_a^M ← ExtractBranch(n_new^M)
      g_a^M ← BeliefGain(σ_a^M)
      if g_a^M < g_best^M then
        σ_best^M ← σ_a^M; g_best^M ← g_a^M
  return σ_best^M
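The nested two-step selection logic can be condensed into a small skeleton: the first step picks the branch maximizing exploration gain, and the second step searches, among admissible paths reaching the same reference viewpoint, for the one with the lowest belief-uncertainty gain. The hard-coded candidate paths and gain values stand in for the sampling-based tree construction and belief propagation; they are illustrative assumptions only.

```python
import math

def exploration_gain(path):
    # New explored volume discounted by path cost (cf. the first step).
    return path["new_volume"] * math.exp(-0.5 * path["cost"])

# First planning step: choose the branch maximizing exploration gain.
candidates = [
    {"name": "A", "new_volume": 8.0, "cost": 2.0},
    {"name": "B", "new_volume": 6.0, "cost": 0.5},
]
best = max(candidates, key=exploration_gain)
goal = best["name"]   # its first viewpoint becomes the reference goal

# Second planning step: among admissible paths ending at that goal,
# minimize the belief gain (e.g. D-optimality of the propagated
# covariance); the straight path serves as the fallback.
admissible = [
    {"goal": "B", "belief_gain": 0.9},   # straight path
    {"goal": "B", "belief_gain": 0.6},
    {"goal": "A", "belief_gain": 0.1},   # rejected: wrong goal viewpoint
]
paths = [p for p in admissible if p["goal"] == goal]
selected = min(paths, key=lambda p: p["belief_gain"])
print(goal, selected["belief_gain"])
```

The key design point mirrored here is that the second step never changes the exploration goal; it only re-routes how the robot reaches it so that localization uncertainty stays low.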
rhemplanner Complexity Analysis
rhemplanner Evaluation (Experimental)
Uncertainty-aware Exploration & Mapping Kostas Alexis, Autonomous Robots Lab, 05.03.2016
Open Source Contributions

Open Source Code:
- Structural Inspection Planner: https://github.com/ethz-asl/structuralinspectionplanner
- Next-Best-View Planner: https://github.com/ethz-asl/nbvplanner
- Receding Horizon Exploration and Mapping Planner: https://github.com/unr-arl/rhemplanner

Associated Datasets:
- Structural Inspection Planner: https://github.com/ethz-asl/structuralinspectionplanner/wiki/example-results and http://changedetectiondataset.wikispaces.com/
- Next-Best-View Planner: https://github.com/ethz-asl/nbvplanner/wiki/example-results
- Receding Horizon Exploration and Mapping Planner: https://github.com/unr-arl/icra-2017-datasets
- Solar-powered UAV Sensing & Mapping: http://projects.asl.ethz.ch/datasets/doku.php?id=fsr2015
- Change Detection Dataset: http://changedetectiondataset.wikispaces.com/
Thank you! Please ask your questions!