Terrain Data Real-time Analysis Based on Point Cloud for Mars Rover


Haoruo ZHANG, Yuanjie TAN, Qixin CAO
Institute of Robotics, Shanghai Jiao Tong University, No. 800 DongChuan Road, Minhang, Shanghai, China
e-mail: haoruozhang@foxmail.com

Abstract. With the development of space exploration, more and more aerospace researchers are paying attention to Mars, and Mars rovers capable of autonomous navigation are gradually becoming a focus of study. Above all, autonomous navigation requires real-time modelling and analysis of the unknown environment, providing the real-time information needed for Mars rover motion planning. This paper first introduces the research content briefly. Second, a vision sensor on the Mars rover is used to acquire 3D point clouds of the environment in real time, and a 3D grid map is created to reduce the complexity of the 3D map for real-time analysis. Third, relying on the established 3D grid map, terrain and environmental information is analysed and identified, providing the information needed for Mars rover motion planning.

Keywords: vision sensor, 3D point cloud, 3D grid map, obstacle crossing possibility, obstacle feature.

1 Introduction

The aim of this research is to achieve environment modelling and real-time terrain data analysis based on point clouds for a Mars rover: to establish a stereo vision sensor system on the rover and to achieve 3D point cloud creation, modelling, and real-time environmental analysis. For the particular rover model used, a stereo vision sensor is set up and connected to the rover's on-board computer, and the program is written with the open-source PCL library under Microsoft Visual C++. The program is then combined with Mars rover motion planning in the Webots robot simulation environment and finally run on the Mars rover.

The research studies methods for terrain point cloud data analysis for the Mars rover: in particular, how to reduce the complexity of the 3D point cloud map by creating a 3D grid map that retains sufficient environmental

information in a short time [1]. Given the complex environment on Mars, the research also studies how to obtain further information from the environment analysis: identifying the areas that can or cannot be passed through, calculating the characteristic data of each area by obstacle feature analysis, and then providing these data to the motion planning of the Mars rover [2]. The main framework of the modelling and terrain data analysis system is shown in Fig. 1.

Fig. 1 Modelling and terrain data analysis system of environment for Mars rover (hardware part: stereoscopic vision sensor, sensor platform, connection to the Mars rover on-board computer; software part: acquiring original 3D point cloud data, building the 3D grid map, analysing obstacle crossing possibility, analysing obstacle features)

2 Original 3D Point Cloud Data Acquisition and Processing

A stereo vision sensor such as the Bumblebee2 can be utilized to acquire the original 3D point cloud data [3]. Since this research focuses mostly on 3D point cloud processing, a Kinect sensor is used as the stereo vision sensor temporarily. The sensor platform is built on the Mars rover, a six-wheeled rocker-type mobile robot, and the stereo vision sensor platform can adjust its viewing angle within a certain range.

2.1 Original 3D Point Cloud Data Acquisition

In general, a stereo vision sensor provides a depth image and a color image of the current environment, from which the original 3D point cloud map can be built [4]. This research proposes a method to build the original 3D point cloud map from a depth image and a color image of the original environment, acquired as shown in Fig. 2.

Fig. 2 Original environment data acquisition (left: depth image; right: color image)

The 3D point cloud is built by integrating the depth image and the color image, as follows. In a typical stereo vision sensor coordinate system, the Z-axis is the optical axis of the lens, so the depth value of each pixel in the depth image is the Z value in the actual coordinate space, while X and Y are the pixel indices in the depth image (X the column, Y the row). The X and Y pixel coordinates therefore have to be converted to actual X and Y values. First, the field of view of the stereo vision sensor is acquired, including the horizontal and vertical viewing angles. Second, the unit-conversion ratios are calculated.

Fig. 3 Proportion of unit conversion

With α the horizontal viewing angle and β the vertical viewing angle:

RealWorldXtoZ = 2 tan(α/2),  RealWorldYtoZ = 2 tan(β/2)   (1)

Third, the actual X and Y values are calculated, where nXRes is the pixel resolution of the depth image in the width direction and nYRes the resolution in the height direction:

NormalizedX = X / nXRes − 0.5,  NormalizedY = 0.5 − Y / nYRes   (2)
X = NormalizedX · Z · RealWorldXtoZ   (3)
Y = NormalizedY · Z · RealWorldYtoZ   (4)
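To make the conversion concrete, the following is a minimal C++ sketch of equations (1)-(4). The function name, the row-major buffer layout, and the assumption that depth values are given in metres are illustrative, not part of the original system.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point3D { float x, y, z; };

    // Convert a depth image (row-major, one depth value per pixel, in metres)
    // into a 3D point cloud using equations (1)-(4). alpha and beta are the
    // horizontal and vertical viewing angles of the sensor in radians.
    std::vector<Point3D> depthToPointCloud(const std::vector<float>& depth,
                                           int nXRes, int nYRes,
                                           float alpha, float beta)
    {
        const float realWorldXtoZ = 2.0f * std::tan(alpha / 2.0f);  // eq. (1)
        const float realWorldYtoZ = 2.0f * std::tan(beta / 2.0f);
        std::vector<Point3D> cloud;
        cloud.reserve(static_cast<std::size_t>(nXRes) * nYRes);
        for (int y = 0; y < nYRes; ++y) {
            for (int x = 0; x < nXRes; ++x) {
                const float z = depth[static_cast<std::size_t>(y) * nXRes + x];
                if (z <= 0.0f) continue;                      // invalid pixel
                const float nx = x / static_cast<float>(nXRes) - 0.5f;  // eq. (2)
                const float ny = 0.5f - y / static_cast<float>(nYRes);
                cloud.push_back({ nx * z * realWorldXtoZ,     // eq. (3)
                                  ny * z * realWorldYtoZ,     // eq. (4)
                                  z });
            }
        }
        return cloud;
    }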

Fig. 4 Comparison between actual environment and 3D point cloud

Table 1 Errors between actual value and 3D point cloud value

Feature  Actual value (m)  3D value (m)  Error (%)
1        0.3530            0.35182       0.33433
2        0.0600            0.05927       1.21667
3        0.2260            0.22435       0.73009
4        0.3820            0.38104       0.25131

2.2 Building 3D Grid Map

Mars rover motion planning needs large amounts of real-time environment data [5]. It is therefore necessary to build a 3D grid map, which reduces the complexity of map creation and the running time of the algorithm.

Fig. 5 3D grid map (grid size 0.1 m left, 0.05 m right)

The filter module of the open-source PCL library provides functions that remove unwanted points from 3D point cloud data with different filters [6]. This research uses the filter class ApproximateVoxelGrid to simplify the 3D point cloud map: it generates a 3D grid map from the input original 3D point cloud map, approximating the point set contained in each grid cell by the cell centre. The chosen 3D grid size is 0.05 m; the data points are reduced from 163,209 to 2,440, and the point cloud file shrinks from 7,078 KB to 82 KB, while the environmental characteristics of the data are preserved.
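As a usage sketch of this downsampling step (function and variable names are illustrative, not taken from the original program), the PCL ApproximateVoxelGrid filter can be applied as follows:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/approximate_voxel_grid.h>

    // Downsample the original point cloud into a 3D grid map with 0.05 m
    // cells, as described in Section 2.2.
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr
    buildGridMap(const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& input)
    {
        pcl::ApproximateVoxelGrid<pcl::PointXYZRGB> grid;
        grid.setInputCloud(input);
        grid.setLeafSize(0.05f, 0.05f, 0.05f);   // 0.05 m grid size
        pcl::PointCloud<pcl::PointXYZRGB>::Ptr output(
            new pcl::PointCloud<pcl::PointXYZRGB>);
        grid.filter(*output);   // e.g. 163,209 points -> 2,440 points
        return output;
    }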

Terrain Data Real-time Analysis Based on Point Cloud for Mars Rover 5 ApproximateVoxelGrid can generate 3D grid map with the input of original 3D point cloud map, and the research takes advantage of all the center of the grid to approximate point sets that are contained in the grid. The research choices for 3D grid size is 0.05m. The data points are reduced from 163 209 points to 2440 points, and the size of point cloud file reduce from 7078KB to 82KB, ensuring the environmental characteristics of the information simplify data. 3 Terrain Data Analysis of Single-frame 3D Grid Map As for terrain data analysis, the coordinate system of 3D grid map needs to be changed [7]. Due to the current coordinate values obtained 3D grid map is based stereo vision sensor coordinate system, must be transformed into a coordinate system of Mars rover. 3.1 Analysing Obstacle Crossing Possibility The research gained some motion data of Mars rover about obstacle crossing from physical experiments, namely the rover's wheels cannot pass through slope of about 30 degrees or more, and the height of the obstacle radius rover wheels (11cm) above. So the research needs to search all of the data points on the 3D grid map, to detect whether obstacle whose slope angle is more than 30 degrees exists or not. So, we need to search all the data points of the 3D grid map and find out point sets (two points) that meat some requirements. Firstly, the difference between X values of the two points is less than 0.1m, while the difference between Y values of the two points is greater than 0.05m. In this way, the two points in the point set are closest with each other in the 3D grid map (grid size 0.05m), and they have different Y values. Then, calculating the slope between two points in all point sets. If the point set whose slope angle is more than 30 degree, the algorithm will judge whether the point set is located on the area that the wheels of Mars rover will pass through or not. And X value of the area is more than 0.2m or less than -0.2m. And if X value of points is less than 0.2m and more than -0.2m, the algorithm will judge whether Z value of these points is less than the bottom height of Mars rover or not, namely if Z value is more than 0.32m, the area could not be passed through. In addition, if the point set whose slope angle is more than 30 degree is located in the area whose X value is less than 0.2m and more than -0.2m, the current terrain cannot be passed through.

Fig. 6 Flow diagram of obstacle crossing possibility analysis (search point sets; slope > 30°; |X| < 0.2 m; Z > 0.32 m; result: cannot pass through / pass through)

3.2 Analysing Obstacle Feature

If the obstacle crossing analysis concludes that the current 3D grid map can be passed through, the obstacle features are analysed next, so that the Mars rover can adopt a different strategy under each obstacle-feature condition. The input of the algorithm is a pointer to the point cloud, and the output is a variable named workcase. The experiments analyse some relatively simple obstacle features. A workcase value of 1 indicates flat ground; 2 a straight uphill; 3 a straight uphill on the left side; 4 a diagonal uphill on the left side; 5 a straight uphill on the right side; 6 a diagonal uphill on the right side; 7 a diagonal uphill; 8 a straight downhill; 9 or 10 a downhill on one side (left or right).

First, the algorithm judges whether the area is workcase 1 from the average and variance of the Z values. It then selects four arrays of data points whose X values are -0.45 m, -0.25 m, 0.25 m and 0.45 m respectively, and sorts the Y values of the data points in each array by size. Within each array, it searches for the points whose Z value is greater than 0.05 m and assigns the smallest such Y value to the variable distance. Second, the algorithm judges whether the difference between the distances at X = -0.45 m and X = 0.45 m, written Diff(-0.45 to 0.45), is less than 0.1 m. The remaining terrain cases are distinguished as described in Fig. 7; a code sketch of the decision logic follows.
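The following C++ sketch shows one possible reading of this decision logic (see Fig. 7). The helper columnDistance, the column-matching tolerance, the handling of empty arrays, and the exact pairing of the left/right branches are assumptions for illustration:

    #include <cmath>
    #include <vector>

    struct Point3D { float x, y, z; };

    // Smallest Y ("distance") among points with Z > 0.05 m in the array of
    // points whose X value lies near the given column; returns false when
    // the array is empty. The 0.025 m tolerance is half a 0.05 m grid cell.
    bool columnDistance(const std::vector<Point3D>& grid, float column,
                        float& distance)
    {
        bool found = false;
        distance = 1e9f;
        for (const Point3D& p : grid) {
            if (std::fabs(p.x - column) > 0.025f || p.z <= 0.05f) continue;
            if (p.y < distance) { distance = p.y; found = true; }
        }
        return found;
    }

    int classifyWorkcase(const std::vector<Point3D>& grid)
    {
        if (grid.empty()) return 1;
        // Average and variance of Z decide flat ground (workcase 1).
        float mean = 0.0f, var = 0.0f;
        for (const Point3D& p : grid) mean += p.z;
        mean /= grid.size();
        for (const Point3D& p : grid) var += (p.z - mean) * (p.z - mean);
        var /= grid.size();
        if (mean > 0.0f && mean < 0.05f && var < 0.1f) return 1;

        float dL, dl, dr, dR;
        const bool hasL = columnDistance(grid, -0.45f, dL);
        const bool hasl = columnDistance(grid, -0.25f, dl);
        const bool hasr = columnDistance(grid,  0.25f, dr);
        const bool hasR = columnDistance(grid,  0.45f, dR);

        if (mean < 0.0f)                                 // downhill branch
            return (hasL && hasR && std::fabs(dL - dR) < 0.1f)
                       ? 8 : 9;                          // 9 or 10: one side
        if (hasL && hasR && std::fabs(dL - dR) < 0.1f)
            return 2;                                    // straight uphill
        if (!hasr || !hasR)                              // slope only on left
            return (hasL && hasl && std::fabs(dL - dl) < 0.1f) ? 3 : 4;
        if (!hasl || !hasL)                              // slope only on right
            return (hasr && hasR && std::fabs(dR - dr) < 0.1f) ? 5 : 6;
        return 7;                                        // diagonal uphill
    }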

Fig. 7 Flow diagram of obstacle feature analysis: flat ground (workcase 1) when 0 < E(Z) < 0.05 m and D(Z) < 0.1; when E(Z) < 0, straight downhill (workcase 8) if Diff(-0.45 to 0.45) < 0.1 m, otherwise downhill on one side (workcase 9 or 10); otherwise, straight uphill (workcase 2) if Diff(-0.45 to 0.45) < 0.1 m; an empty array at X = 0.25 m or 0.45 m indicates an uphill on the left side, straight (workcase 3) or diagonal (workcase 4) according to Diff(-0.25 to -0.45) < 0.1 m; an empty array at X = -0.25 m or -0.45 m indicates an uphill on the right side, straight (workcase 5) or diagonal (workcase 6) according to Diff(0.25 to 0.45) < 0.1 m; otherwise a diagonal uphill (workcase 7).

3.3 Experiments under Webots Robot Simulation Environment

Fig. 8 Algorithm experiments under the Webots robot simulation environment

Table 2 Errors between actual value and analysis value in experiments

Workcase  Feature   Actual value  Analysis value  Error (%)
2         distance  0.6750 m      0.6763 m        0.1926
          gradient  20.0          20.0021         0.0105
3         distance  0.4740 m      0.4750 m        0.2110
          gradient  20.0          20.0014         0.0070
4         alpha     -26.0         -26.6887        2.6488
8         distance  1.2950 m      1.3007 m        0.4402
          gradient  -11.5         -11.4787        0.1852

4 Conclusion

Considering the size of the Mars rover, the errors between actual values and analysis values are within an acceptable range. As Fig. 8 shows, the algorithm realizes obstacle feature recognition and returns a result in a short time, about 0.5 s. In conclusion, the simulation experiments verify the accuracy and real-time performance of the algorithm.

Acknowledgment. Supported by the National Natural Science Foundation of China (Grant No. 61273331).

References

1. Andreasson H, Treptow A, Duckett T. Localization for mobile robots using panoramic vision, local features and particle filter. 2005: 3348-3353
2. Bajracharya M, Maimone M W, Helmick D. Autonomy for Mars rovers: past, present, and future. Computer, 2008, 41(12): 44-50
3. Kim I H, Kim D E, Cha Y S, et al. An embodiment of stereo vision system for mobile robot for real-time measuring distance and object tracking. International Conference on Control, Automation and Systems (ICCAS'07). IEEE, 2007: 1029-1033
4. Matthies L, Kelly A, Litwin T, et al. Obstacle detection for unmanned ground vehicles: a progress report. Springer London, 1996
5. Manduchi R, Castano A, Talukder A, et al. Obstacle detection and terrain classification for autonomous off-road navigation. Autonomous Robots, 2005, 18(1): 81-102
6. Nguyen D V, Kuhnert L, Schlemper J, et al. Terrain classification based on structure for autonomous navigation in complex environments. Third International Conference on Communications and Electronics (ICCE). IEEE, 2010: 163-168
7. Su L, Luo C, Zhu F. Obtaining obstacle information by an omnidirectional stereo vision system. IEEE International Conference on Information Acquisition. IEEE, 2006: 48-52