Filtering and mapping systems for underwater 3D imaging sonar


Tomohiro Koshikawa 1,a, Shin Kato 1,b, and Hitoshi Arisumi 1,c
1 Field Robotics Research Group, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba, Ibaraki, Japan
a <koshikawa.tomohiro@aist.go.jp>, b <shin.kato@aist.go.jp>, c <h-arisumi@aist.go.jp>

Keywords: sonar, filtering, 3D mapping, point cloud

Abstract. In recent years, seawalls and piers in Japan have been aging, and sonar-equipped underwater robots are expected to be used to inspect them, since they can operate in environments that are hazardous for people. However, sonar data contains heavy occlusion and clutter caused by water flow and waves, so automatic filtering is difficult and the work is often done by hand. In this paper, we propose a new method for automatically filtering and mapping such sonar data. To achieve this goal, we apply point cloud filtering and 3D object recognition techniques, and with these methods we build a robust mapping system for underwater sonar.

1. Introduction

1.1 Related work
Both sonar and cameras are used for underwater measurement. Compared with a laser range finder, sonar has poor accuracy and data density, which makes it hard to use for mapping and navigation [1]; in particular, it is hard to extract robust feature points [2]. Position accuracy is therefore often improved by combining sonar with a high-performance IMU (Inertial Measurement Unit) and a camera [3,4]. When only sonar data is used, surface smoothing and the ICP algorithm have long been applied [5]. There are also mapping methods based on plane matching in rivers [6] and on spectral analysis of the data [7]. Recently, Aitor Aldoma built a mapping system [8] using an IMU and Graph SLAM, one of the major SLAM (Simultaneous Localization and Mapping) methods. As another approach, Thomas Guerneve built a mapping system [9] that matches objects to 3D CAD data.
1.2 Target data
We measured two stationary objects, a triplen and a wall, in water by moving the sonar from right to left (Fig. 1), and used the data for mapping. The 3D imaging sonar we used has a field of view of about 55 degrees horizontally and about 40 degrees vertically, and the accuracy of the data is 26 cm. The measurements were taken in a pool about 10 m long and about 8 m deep. Each measurement produces a set of 3D coordinates called a point cloud, with about 2000 points per single sonar frame. The measured data is very noisy: as shown on the left of Fig. 2, the wall appears up to 40 cm thick owing to the poor accuracy. In this paper, we build a mapping system using only this 3D point cloud data.

Fig. 1 Measurement environment
Fig. 2 Raw sonar data (left side) and filtered data (right side)

2. Filtering

We implemented three filters using PCL (Point Cloud Library) [10], a 3D point processing library, to remove noise from the sonar data.

2.1 Statistical outlier removal filter
The statistical outlier removal filter removes points with large point-to-point distances by analysing the distribution of those distances statistically. For each point, the filter sums the point-to-point distances to its neighbours, and from these values it computes the overall average and standard deviation. Assuming the distances follow a Gaussian distribution, the filter removes every point whose distance is larger than the average plus n times the standard deviation. In this paper, we set n to 0, that is, we remove points whose distance is larger than the average.

2.2 Time filter
The time filter removes points that appear unexpectedly in the time series by comparing the previous frame with the current one, keeping only points that persist over time. For each point in the current cloud, it searches for nearby points in the previous cloud, and if the distance from the point to its nearest neighbour exceeds a set threshold, the point is removed.

2.3 Smoothing filter
The sonar measures object surfaces as a thick layer of points, so the surface has to be smoothed. We smoothed it using the moving least-squares method [11], computed over the neighbouring points within a set radius. Applying these three filters to the data on the left of Fig. 2 produces the filtered data on the right of Fig. 2.

3. Mapping

Mapping is the process of concatenating two clouds by comparing the current point cloud with the previous one: it estimates the transformation matrix that best aligns the current cloud to the previous cloud, and by concatenating clouds sequentially we grow the map. To build the mapping system, we apply a 3D object recognition process (Fig. 3). Each step of this process, called a pipeline, has many alternative methods, which makes them easy to compare. There are two pipelines: the local pipeline and the global pipeline.

3.1 Local pipeline

Fig. 3 Object recognition pipeline

The local pipeline suits noisy data and is used for segmentation (classifying objects), for 3D SLAM (Simultaneous Localization and Mapping), and so on. As shown in Fig. 3, the local pipeline first extracts keypoints, i.e. distinctive points within the whole cloud. Second, it describes the local characteristics around each keypoint; descriptors may use normal vectors, position vectors, and so on. Next, it compares the descriptors of the previous cloud (model cloud) and the current cloud (scene cloud), matching similar descriptors to form correspondences. In the correspondence grouping phase, it estimates a rough transformation matrix by taking the geometric relationships among the correspondences into account. After that, the ICP (Iterative Closest Point) algorithm refines the alignment. In the keypoint extraction phase, repeatability is the most important property.
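As a concrete illustration, the statistical outlier removal filter (Sec. 2.1) and the time filter (Sec. 2.2) can be sketched in a few lines of pure Python. This is a simplified stand-in for the PCL implementations, not the paper's actual code, and all function names are illustrative.

```python
import math

def mean_knn_distance(points, idx, k):
    """Mean distance from points[idx] to its k nearest neighbours."""
    dists = sorted(math.dist(points[idx], q)
                   for j, q in enumerate(points) if j != idx)
    return sum(dists[:k]) / k

def statistical_outlier_removal(points, k=3, n_sigma=0.0):
    """Keep points whose mean k-NN distance is at most mean + n_sigma * stddev.
    The paper sets n_sigma = 0, i.e. it keeps only points below the average."""
    d = [mean_knn_distance(points, i, k) for i in range(len(points))]
    mu = sum(d) / len(d)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / len(d))
    return [p for p, di in zip(points, d) if di <= mu + n_sigma * sigma]

def time_filter(current, previous, max_dist):
    """Keep only current points that have a neighbour in the previous frame."""
    return [p for p in current
            if min(math.dist(p, q) for q in previous) <= max_dist]
```

A real implementation would use a k-d tree for the neighbour searches, as PCL does; the brute-force loops here are only for clarity.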
According to Filipe and Alexandre [13] and Salti et al. [14], ISS (Intrinsic Shape Signatures) [12] is the keypoint detector most robust to noise, rotation, and scale changes.

3.2 Global pipeline
The global pipeline needs complete surface information such as 3D CAD data. In return, it has advantages such as low memory consumption and greater descriptiveness, even for objects with poor geometric structure [15]. In this pipeline, description is done per object, so segmentation is the first step. After that, each object is described and matched, and a rough rotation matrix is estimated by matching against the teaching data. The translation matrix is estimated by matching the centroids of the corresponding objects in the scene and model point clouds. This pipeline presupposes that the detected objects are known, because teaching data must be prepared in advance.

3.3 ICP algorithm
The ICP algorithm is an estimation method that does not take the geometric characteristics of objects into account. First, it forms tentative correspondences by computing the distances between the scene and model clouds and pairing each point with its closest counterpart. It then estimates the transformation matrix that minimises the sum of the distances between the correspondences. Repeating this process, it estimates a matrix that aligns the two point clouds. There are several variants that differ in how the distance is defined, such as point-to-plane methods using normal vectors and Generalized-ICP [16]. Like the local and global pipelines, the ICP algorithm needs an initial alignment, because it can fall into a local minimum.

4. Implementation and result

We implemented the mapping system using the local and global pipelines, starting with the local pipeline. However, the positions of the model cloud and scene cloud did not match at the keypoint extraction phase. We tried ISS, SIFT (Scale Invariant Feature Transform) [17], NARF (Normal Aligned Radial Feature) [18], uniform sampling, and Harris [19], but none of these methods could extract matching keypoints. We believe this problem is caused by the noisy, low-density sonar data: such data cannot reliably detect the same point on an object, so positional errors change the local characteristics. We tried the later pipeline steps with various methods, but the estimated transformation matrix had clearly wrong values, such as translations far larger than the measurement field.
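The inner loop of the point-to-point ICP described in Sec. 3.3 can be sketched as follows. This simplified version estimates only a translation (a full implementation also solves for rotation, typically via SVD, and checks error convergence); the names are illustrative and the code is not the paper's implementation.

```python
import math

def nearest(p, cloud):
    """Closest point in cloud to p (brute force; PCL would use a k-d tree)."""
    return min(cloud, key=lambda q: math.dist(p, q))

def icp_translation(scene, model, iterations=20):
    """Pair each scene point with its nearest model point, then shift the
    scene by the mean offset; repeating this minimises the summed squared
    point-to-point distances over the translation."""
    tx, ty, tz = 0.0, 0.0, 0.0          # accumulated translation
    for _ in range(iterations):
        moved = [(x + tx, y + ty, z + tz) for x, y, z in scene]
        pairs = [(p, nearest(p, model)) for p in moved]   # correspondences
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        dz = sum(q[2] - p[2] for p, q in pairs) / len(pairs)
        tx, ty, tz = tx + dx, ty + dy, tz + dz
    return tx, ty, tz
```

Because the correspondences are recomputed from the nearest neighbours at every iteration, a poor starting pose can lock the loop into a wrong local minimum, which is exactly why the pipelines above supply an initial alignment.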
Since the estimation failed too often, we concluded that the local pipeline cannot be used for sonar mapping. The global pipeline, in turn, cannot rely on teaching data prepared beforehand in this mapping system, so instead we segment the objects and match each object between the scene and model clouds. The coordinates of the triplen move over the time series while the wall position does not, so we extracted the triplen as a landmark object and matched it over time. We built a processing pipeline that extracts the landmark object and matches it, after which the final alignment is done by the ICP algorithm. To extract the landmark object, we use Euclidean segmentation and count the number of points in each object. In scenes with many objects, another method such as region growing would be appropriate, but it is not needed for this data. If the point count and centroid position of a cloud are similar over the time series, we regard it as the same object. As the initial alignment for ICP, we use the centroid position of the landmark object; after that, we run ICP with the Euclidean (point-to-point) distance. With other ICP variants, such as plane-to-plane distance, the estimated transformation matrix contained large spurious rotation and scaling. Fig. 4 shows the result of concatenating the clouds using the initial alignment and ICP with Euclidean distance. For comparison, Fig. 4 also shows a map built without the initial alignment: the wall portion is aligned, but the triplen data is scattered. We believe the triplen correspondences were not minimised because there were far more correspondences on the wall, so the sum of distances was minimised mostly over the wall portion; minimising the distances of the triplen correspondences would increase the total, because it would misalign the wall portion, which holds many correspondences. In ICP without initial alignment, the object with the most points dominates the whole estimation.
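The landmark-based initial alignment can be sketched as a simple Euclidean clustering followed by a centroid shift. This pure-Python sketch stands in for PCL's Euclidean segmentation, and the function names are illustrative.

```python
import math

def euclidean_clusters(points, tol):
    """Group points connected by chains of distances below tol (flood fill)."""
    clusters, unvisited = [], set(range(len(points)))
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= tol]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters

def centroid(cluster):
    n = len(cluster)
    return tuple(sum(p[axis] for p in cluster) / n for axis in range(3))

def initial_translation(scene_landmark, model_landmark):
    """Translation that moves the scene landmark centroid onto the model's."""
    cs, cm = centroid(scene_landmark), centroid(model_landmark)
    return tuple(m - s for m, s in zip(cm, cs))
```

Choosing the landmark cluster by point count and centroid continuity across frames, as described above, then applying this translation before ICP, is what keeps the smaller triplen object from being dominated by the wall during the final refinement.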
We built the mapping system without object descriptors, using only the coordinates of the points. In general, a pointwise description is simple and has low memory consumption, but it is not robust to noise and is often not descriptive enough [20]. However, considering the failure of keypoint extraction and description in the local pipeline, and the result of ICP with plane-to-plane distances, methods that rely on normal vectors and position vectors should be avoided on low-resolution data such as sonar data.

Fig. 4 Map data (left side: landmark object aligned before ICP with Euclidean distance; right side: no alignment before ICP)

We will implement these systems on ROS (Robot Operating System) as independent threads; since each thread runs asynchronously, real-time processing becomes possible.

5. Conclusion

We built a robust mapping system for an imaging sonar that scans a limited range. We removed noise with a statistical outlier removal filter, a time filter, and a smoothing filter. We proposed a method that extracts a landmark object, uses its correspondence for the initial alignment, and then applies the ICP algorithm. Euclidean segmentation is used to extract the landmark object, and for the initial alignment we estimate the translation matrix from the centroid coordinates of the landmark object. Within ICP, the variant that minimises the point-to-point distance gave the best result. For sonar-based mapping, a method using only pointwise description is therefore a good choice. This paper covered filtering and mapping; improving the processing speed and implementing localization remain future work.

References

[1] J. Tardós, J. Neira, P. Newman, and J. Leonard, Robust Mapping and Localization in Indoor Environments Using Sonar Data, International Journal of Robotics Research, Vol. 21, pp. 311-330, 2002.
[2] S. Majumder, J. Rosenblatt, S. Scheding, and H. Durrant-Whyte, Map building and localization for underwater navigation, Lecture Notes in Control and Information Sciences, Vol. 271, pp. 511-520, 2001.
[3] S. Majumder, J. Rosenblatt, S. Scheding, and H. Durrant-Whyte, Map building and localization for underwater navigation, Lecture Notes in Control and Information Sciences, Vol. 271, pp. 511-520, 2001.

[4] M. Hammond and S. Rock, A SLAM-based Approach for Underwater Mapping using AUVs with Poor Inertial Information, Proceedings of the IEEE/OES AUV Conference, 2014.
[5] V. Murino, A. Fusiello, N. Iuretigh, and E. Puppo, 3D mosaicing for environment reconstruction, Proceedings of the 15th International Conference on Pattern Recognition, Vol. 3, pp. 358-362, 2000.
[6] K. Pathak, A. Birk, and N. Vaskevicius, Plane-based registration of sonar data for underwater 3D mapping, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010.
[7] H. Bülow and A. Birk, Spectral registration of noisy sonar data for underwater 3D mapping, Autonomous Robots, Vol. 30, pp. 307-331, 2011.
[8] A. Aldoma, Z. Marton, F. Tombari, W. Wohlkinger, C. Potthast, B. Zeisl, R. Rusu, S. Gedikli, and M. Vincze, Point Cloud Library: Three-Dimensional Object Recognition and 6 DoF Pose Estimation, IEEE Robotics & Automation Magazine, Vol. 19, pp. 80-91, 2012.
[9] T. Guerneve, K. Subr, and Y. Petillot, Underwater 3D Structures as Semantic Landmarks in SONAR Mapping, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.
[10] R. Rusu and S. Cousins, 3D is here: Point Cloud Library (PCL), IEEE International Conference on Robotics and Automation (ICRA), pp. 1-4, 2011.
[11] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. Silva, Computing and rendering point set surfaces, IEEE Transactions on Visualization and Computer Graphics, Vol. 9, pp. 3-15, 2003.
[12] Y. Zhong, Intrinsic Shape Signatures: A Shape Descriptor for 3D Object Recognition, IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2009.
[13] S. Filipe and L. Alexandre, A comparative evaluation of 3D keypoint detectors in a RGB-D object dataset, 9th International Conference on Computer Vision Theory and Applications, pp. 1-8, 2014.
[14] S. Salti, F. Tombari, and L. Di Stefano, A Performance Evaluation of 3D Keypoint Detectors, International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, pp. 236-243, 2011.
[15] F. Tombari, Keypoints and features, CGLibs Conference, Pisa, pp. 303-312, 2013.
[16] A. Segal, D. Haehnel, and S. Thrun, Generalized-ICP, Robotics: Science and Systems, 2009.
[17] D. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, Vol. 60, pp. 91-110, 2004.
[18] B. Steder, R. B. Rusu, and K. Konolige, Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries, IEEE International Conference on Robotics and Automation (ICRA), 2011.
[19] C. Harris and M. Stephens, A combined corner and edge detector, Alvey Vision Conference, pp. 147-152, 1988.
[20] F. Tombari, Keypoints and features, CGLibs Conference, Pisa, pp. 303-312, 2013.