Cover Page

Abstract ID: 8181
Paper Title: Automated extraction of linear features from vehicle-borne laser data
Contact Author: Dinesh Manandhar (author 1)
E-mail: dinesh@skl.iis.u-tokyo.ac.jp
Phone: +81-3-5452-6417
Fax: +81-3-5452-6414

AUTOMATED EXTRACTION OF LINEAR FEATURES FROM VEHICLE-BORNE LASER DATA

Dinesh MANANDHAR
Research Associate, Centre for Spatial Information Science, The University of Tokyo, Japan
Tel: +81-3-5452-6417  Fax: +81-3-5452-6414
E-mail: dinesh@skl.iis.u-tokyo.ac.jp

Ryosuke SHIBASAKI
Professor, Centre for Spatial Information Science, The University of Tokyo, Japan
Tel: +81-3-5452-6417  Fax: +81-3-5452-6414
E-mail: shiba@skl.iis.u-tokyo.ac.jp

Abstract: In this paper we focus on the automated extraction of linear features, such as guardrails, from vehicle-borne laser data. A Radon transformation is applied to a binary image created from the laser data to identify the seed position and orientation of the most probable linear features. A circle-growing technique is then applied to the seed points to correct the position of the linear features. Once all seed points are corrected, straight lines are fitted to represent the linear features on the original laser data. The height of each linear feature is computed by fitting the maximum and minimum height values of the laser points that fall inside the circles, which yields a 3-D model of the feature. The algorithm succeeds in automatically extracting continuous linear features. If the linear features are discontinuous or the data are occluded, automatic extraction becomes considerably more complex and may fail; in such cases a semi-automated extraction is recommended.

Keywords: Laser Scanning, Feature Extraction, Mobile Mapping

1. INTRODUCTION

Laser point data scanned from a vehicle-borne platform can be used for 3-D modeling of various urban features. Apart from building faces, roads and trees, many other features can be modeled from laser data, among them cables, poles, fences or guardrails, tunnels, vehicles and pedestrians. Refer to Manandhar & Shibasaki (2002) for details on the extraction of some of these features. In this paper we focus on the possibility of automated extraction of linear features (especially guardrails) from laser data. The laser data carry no information other than the range distance; the data are bare 3-D real-world coordinates. Figure 1 shows the mapping vehicle equipped with the laser scanning system that provides the laser data.

Figure 1: Vehicle-borne Laser Mapping System (line cameras and laser scanner)

2. DEFINITION

We define linear features as features that reflect laser points with a linear geometry when viewed along the vehicle trajectory (along track). For example, laser points reflected by cables, guardrails and similar objects are classified as linear features. Only a few points are reflected from these objects in a single scan; the density of reflected points depends on the across-track resolution of the scanner, defined as the distance between successive laser points within the same scan line. Laser points reflected by poles are not classified as linear features, since they form a line along the scanning direction (across track) but not along the vehicle trajectory.

3. LINEAR FEATURE EXTRACTION

There are different approaches to segmenting range data for feature extraction, depending mainly on the type of range data and the features to be extracted. Almost all segmentation algorithms have been developed either for fixed-platform range data, for air-borne range data, or for industrial applications. Hoover et al. conducted a detailed comparative study of various range image segmentation algorithms. These algorithms were developed for fixed platforms and use images from laser range finders or structured-light scanners, whose pixel values are either range distances or intensity values. They are based on the analysis of surface primitives, such as H-K thresholding (Besl, P. J. et al., 1988; Trucco, E. et al., 1995), scan-line division (Jiang, X. Y. et al., 1994), Hough transformation (Newman, T. S. et al., 1993) and morphological analysis (Pitas, I., 1991).

The vehicle-borne laser data are point-cloud data with an irregular distribution. They contain only the 3-D coordinates of each reflected laser point; there are no intensity values. The raw data acquired by the laser scanner are geo-referenced and classified into road and non-road classes. Refer to Manandhar & Shibasaki (2001) for details on this classification. We use the laser points classified as non-road for the extraction of linear features. Figure 2 shows the classified road and non-road laser points; road points are shown in red and non-road points in blue.

Figure 2: Road and non-road classified laser points (red: road points, blue: non-road points)

The feature extraction is done in four major steps: (a) conversion to a raster image and image analysis, (b) identification of seed points by Radon transformation, (c) correction of the seed points/lines by circle growing, and (d) fitting of straight lines to the corrected points.

3.1 Image Creation and Analysis

A raster image is created from the non-road point-cloud laser data. A blank grid with square cells is defined; the cell size is fixed at 20 cm x 20 cm. A square cell is not strictly necessary, and the cell size can be varied based on the laser scanner's along-track resolution (the distance between successive scan lines, which depends on the vehicle speed). We have found a 20 cm grid effective for our data. The extent of the blank grid is defined by the minimum and maximum x and y coordinates of the laser data. The laser points are projected onto the blank grid, which is a horizontal (x-y) plane. Different types of images can be created while projecting the laser points onto the grid, e.g. a density image, a maximum height image or an average height image. The density image records the number of laser points falling in each cell; linear features like guardrails and cables exhibit very low values in this image. The maximum height image records the maximum height of the points falling in each cell; building faces exhibit high values in this image, whereas guardrails exhibit low values because they are lower than buildings (roof edges). The density and maximum height images are created for visualization, to show how different features appear when such images are generated from the laser point cloud.

Figure 3: Density Image (number of laser points per cell)
Figure 4: Maximum Height Image
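The projection from the point cloud onto the raster grid can be illustrated with the short sketch below. This is only a minimal sketch of the grid construction described above, not the authors' implementation; it assumes the non-road points are available as an (N, 3) NumPy array of (x, y, z) coordinates, and the function and variable names are illustrative.

import numpy as np

def rasterize_points(points, cell=0.20):
    # Grid extent from the minimum and maximum x and y coordinates
    xy_min = points[:, :2].min(axis=0)
    xy_max = points[:, :2].max(axis=0)
    ncols, nrows = np.ceil((xy_max - xy_min) / cell).astype(int) + 1

    density = np.zeros((nrows, ncols))              # points per cell
    max_height = np.full((nrows, ncols), -np.inf)   # highest z per cell

    cols = ((points[:, 0] - xy_min[0]) / cell).astype(int)
    rows = ((points[:, 1] - xy_min[1]) / cell).astype(int)
    for r, c, z in zip(rows, cols, points[:, 2]):
        density[r, c] += 1                          # density image
        max_height[r, c] = max(max_height[r, c], z) # maximum height image
    max_height[density == 0] = 0.0                  # empty cells set to zero

    return density, max_height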

3.2 Binary Image Creation

A binary image is created by filtering the maximum height image with maximum and minimum height thresholds. The threshold values are set based on the definition of the guardrail. The height of each laser point is normalized before creating the image by setting the road surface height to zero, so that a point one meter above the road surface has a height value of one meter. Guardrails generally appear along the roadside, or on the road itself to separate driving lanes, and are normally about one meter high. We therefore set a maximum height threshold of 1.2 m and a minimum height threshold of 0.2 m, which selects the grid cells whose values lie between 0.2 m and 1.2 m. By changing these threshold values, other linear features (such as cables) can also be identified, though they need further analysis. Figure 5 shows the binary image. At least two linear features (guardrails) are clearly visible, and a third is also visible but is not as continuous as the other two.

Figure 5: Binary image overlaid with straight lines from the Radon transformation

3.3 Line Detection by Radon Transformation

The Radon transformation is used to detect lines in the binary image. It represents an image as a collection of projections along various directions, and a projection can be computed along any angle θ. In general, the Radon transform of f(x, y) is the line integral of f parallel to the y' axis, given by Equations 1 and 2:

R_{\theta}(x') = \int_{-\infty}^{+\infty} f(x'\cos\theta - y'\sin\theta,\; x'\sin\theta + y'\cos\theta)\, dy'    (1)
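As a concrete special case (added here for clarity; it is not in the original text): for θ = 0 the rotated coordinates coincide with (x, y), and Equation 1 reduces to a column-wise sum of the binary image,

R_{0}(x') = \int_{-\infty}^{+\infty} f(x', y')\, dy',

so a line running parallel to the y axis produces a single sharp peak in R_0. It is this peaking behaviour, swept over all projection angles, that the peak selection below exploits.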

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    (2)

However, the Radon transformation only provides the directions in which straight lines appear. It gives neither the actual length of a line segment nor a way to distinguish individual lines that share the same direction. We therefore select the prominent peaks in the Radon image as seed line directions. A threshold is applied to select the prominent peaks in the Radon space; it is set at 50% of the maximum peak value, and the Radon space is filtered with this threshold so that only the candidate peaks remain. A morphological dilation operation is then applied to remove neighbouring small peaks, using a disk structuring element of radius one pixel. The dilated (eroded) value at a pixel x is the maximum (minimum) value of the image in the window defined by the structuring element when its origin is at x. Dilation is expressed mathematically by Equation 3, and the structuring element S is given by Equation 4:

A \oplus S = \{\, x : (S_x \cap A) \neq \emptyset \,\}    (3)

S = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{pmatrix}    (4)

Figure 6 shows the Radon transform of the binary image in Figure 5, and Figure 7 shows the result of the morphological operation on the Radon image that selects only the peak values. These peaks are taken as the orientations of the major linear features in the image and are used to generate candidate straight lines, which are plotted over the binary image as shown in Figure 5.

Figure 6: Radon Transformation of the Binary Image
Figure 7: Peak Selection by Morphological Operation
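The thresholding of Section 3.2 and the peak selection described above can be put together in the following sketch. It is a minimal illustration, not the authors' implementation: it assumes the scikit-image and SciPy libraries are available, and the function names, angle sampling and local-maximum test are choices made for the example.

import numpy as np
from skimage.transform import radon
from scipy.ndimage import grey_dilation

def detect_seed_lines(max_height, min_h=0.2, max_h=1.2):
    # Section 3.2: binary image from the normalized maximum-height image
    binary = ((max_height >= min_h) & (max_height <= max_h)).astype(float)

    # Section 3.3: Radon transform over a sweep of projection angles
    angles = np.arange(180.0)                              # theta in degrees
    sinogram = radon(binary, theta=angles, circle=False)   # rows: x', cols: theta

    # Keep only peaks above 50% of the maximum peak value
    candidates = np.where(sinogram >= 0.5 * sinogram.max(), sinogram, 0.0)

    # Dilate with the 3x3 cross of Equation 4; a pixel that equals its
    # dilated value is a local maximum, i.e. a surviving peak
    cross = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)
    dilated = grey_dilation(candidates, footprint=cross)
    peaks = (candidates > 0) & (candidates == dilated)

    # Each peak gives an offset x' (row index) and an orientation theta (column)
    rows, cols = np.nonzero(peaks)
    return [(r, angles[c]) for r, c in zip(rows, cols)]

Each returned (offset, angle) pair defines one candidate straight line, which is then projected back onto the laser points for the correction step of Section 3.4.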

3.4 Correction of Identified Linear Features

The straight lines detected by the Radon transform indicate only the orientation of lines in the image; they do not show the true segments or shapes. The peak in the Radon image is produced by the longest line section in the image, and if several line segments share the same orientation we obtain only one peak in the Radon space. The identified lines therefore need further analysis to recover their true orientation and segment lengths. This is accomplished by circle growing, which checks whether laser points correspond to every section of the line segment. The identified lines are projected back onto the laser data filtered with the height thresholds (for guardrails). The point corresponding to the peak of the identified line is taken as the initial seed point for circle growing. Circles of radius 20 cm (the grid size of the raster image) are grown at every line section until some laser points are found inside the circle. The circle growing is terminated if no laser points are found by the time the radius has grown to two meters, which indicates that there is no line segment at this point or that the linear feature is not continuous. This approach also makes it possible to trace line segments that are not exactly straight. A radius of two meters corresponds to a search radius of five pixels on either side of the line/point in the image. Once laser points are found inside a circle, the growth is stopped and the mean of their x and y coordinates is taken as the new point (on the new line segment). The minimum and maximum height values of the laser points that fall inside the circle are also computed. This is performed for every line section, so the single line identified by the Radon transform is divided into several segments, depending on the circle radius. Figure 8 shows the results of circle growing.

Figure 8: Circle growing on a line identified from the Radon transformation, with laser point data

Line segments having the same circle radius are grouped together to form a single segment. The line generated by connecting these points may not be straight, so we perform a robust 2-D straight-line fit. The robust fit uses an iteratively re-weighted least squares algorithm: the weight for each iteration is calculated by applying the bisquare function to the residuals from the previous iteration, which gives lower weight to points that do not fit well and makes the result less sensitive to outliers than ordinary least squares regression. Robust fitting is also applied to the maximum and minimum height data separately, so we obtain fitted x, y, z_min and z_max coordinates for each line segment. Using these coordinates, 3-D patches are created to represent the guardrails extracted from the vehicle-borne laser data. A combined sketch of the circle-growing and robust-fitting steps is given below. The final result is shown in Figure 9.

Figure 9: 3-D model of a linear feature (guardrail, shown as blue patches) extracted automatically from laser point data. The feature is overlaid with the laser point data for verification.
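The circle-growing and robust-fitting steps are illustrated in the sketch below. It is a minimal sketch rather than the authors' implementation: it assumes the height-filtered laser points are an (N, 3) NumPy array, walks in only one direction from the seed (in practice the walk would be run both ways), and the function names, step handling and iteration count are choices made for the example.

import numpy as np

def grow_circles(points, seed_xy, direction, step=0.20, max_radius=2.0):
    """Walk along the detected line, growing a circle at each station; return
    one (x, y, z_min, z_max) row per line section that contains laser points."""
    direction = np.asarray(direction, float) / np.linalg.norm(direction)
    station = np.asarray(seed_xy, dtype=float)
    sections = []
    while True:
        inside, radius = None, step
        while radius <= max_radius:
            dist = np.linalg.norm(points[:, :2] - station, axis=1)
            if np.any(dist <= radius):
                inside = points[dist <= radius]
                break
            radius += step                          # grow by one grid cell (20 cm)
        if inside is None:                          # nothing within 2 m: feature ends
            break
        centre = inside[:, :2].mean(axis=0)         # corrected point on the segment
        sections.append((centre[0], centre[1],
                         inside[:, 2].min(),        # z_min inside the circle
                         inside[:, 2].max()))       # z_max inside the circle
        station = centre + step * direction         # move on to the next line section
    return np.array(sections)

def bisquare_line_fit(x, y, iterations=10, tune=4.685):
    """Robust fit of y = a*x + b by iteratively re-weighted least squares with
    bisquare (Tukey) weights, applied to the x-y line and to the z extents."""
    w = np.ones_like(x, dtype=float)
    a = b = 0.0
    for _ in range(iterations):
        A = np.stack([x, np.ones_like(x)], axis=1) * np.sqrt(w)[:, None]
        a, b = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)[0]
        r = y - (a * x + b)                         # residuals of this iteration
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale estimate
        u = r / (tune * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)   # bisquare weights
    return a, b

Fitting bisquare_line_fit to the section centres (and separately to their z_min and z_max values) yields the fitted x, y, z_min and z_max coordinates from which the 3-D patches are built.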

4. CONCLUSION

It is possible to identify linear features from vehicle-borne laser data. The algorithm succeeds in extracting continuous linear features automatically. If the linear features are discontinuous (or span only a few meters) or the data are occluded, automatic extraction becomes considerably more complex and may fail; in such cases a semi-automated extraction is recommended. Real data contain both continuous and discontinuous linear features, so fully automatic extraction of all linear features is only partially successful. However, the algorithm can be used to identify candidate linear features in a semi-automated process in which the user identifies the laser points reflected by the linear features; this reduces the operation time to some extent and eases the manual work. With some improvement the algorithm can also be used to identify cables automatically; in that case suitable height thresholds should be assigned (e.g. 6.0 m minimum and 12.0 m maximum), and we have preliminary results for identifying cables as well. The algorithm is undergoing further development to improve robustness and to handle the extraction procedure more interactively with the user where the semi-automated process is necessary.
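As a small illustration of the cable case mentioned above, the hypothetical detect_seed_lines sketch from Section 3.3 would simply be re-run with the cable thresholds suggested in the conclusion; everything else in the pipeline stays the same.

# Cable candidates instead of guardrails: only the height thresholds change.
cable_seeds = detect_seed_lines(max_height, min_h=6.0, max_h=12.0)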

REFERENCES

a) Journal Papers

Bose, S.K., Biswas, K.K., Gupta, S.K. (1996), An integrated approach for range image segmentation and representation, Artificial Intelligence in Engineering, 1, pp. 243-252.

Pitas, I., Maglara, A. (1991), Range image analysis by using morphological signal decomposition, Pattern Recognition, Vol. 24, No. 2, pp. 165-181.

b) Conference Papers

Manandhar, D., Shibasaki, R. (2002), Auto-extraction of urban features from vehicle-borne laser data, IAPRS, Vol. 34, Part 4, Geospatial Theory, Processing and Applications, Ottawa, 2002.

Manandhar, D., Shibasaki, R. (2001), Feature extraction from range data, Proceedings of ACRS 2001, 22nd Asian Conference on Remote Sensing, Singapore, Vol. 2, pp. 1113-1118, 5-9 November 2001.

c) Other Documents

Hoover, A., Jean-Baptiste, G., Jiang, X., Flynn, P.J., Bunke, H., Goldgof, D., Bowyer, K., A Comparison of Range Image Segmentation Algorithm, URL: http://marathon.csee.usf.edu/range/seg-comp/segcomp.html

Matlab Manual on Optimization, Matlab Software, www.mathworks.com