Cover Page
Abstract ID: 8181
Paper Title: Automated Extraction of Linear Features from Vehicle-borne Laser Data
Contact Author: Dinesh Manandhar (author 1)
Email: dinesh@skl.iis.u-tokyo.ac.jp
Phone: +81-3-5452-6417
Fax: +81-3-5452-6414
AUTOMATED EXTRACTION OF LINEAR FEATURES FROM VEHICLE-BORNE LASER DATA

Dinesh MANANDHAR
Research Associate, Centre for Spatial Information Science, The University of Tokyo, Japan
Tel: +81-3-5452-6417  Fax: +81-3-5452-6414
E-mail: dinesh@skl.iis.u-tokyo.ac.jp

Ryosuke SHIBASAKI
Professor, Centre for Spatial Information Science, The University of Tokyo, Japan
Tel: +81-3-5452-6417  Fax: +81-3-5452-6414
E-mail: shiba@skl.iis.u-tokyo.ac.jp

Abstract: In this paper, we focus on the automated extraction of linear features, such as guardrails, from vehicle-borne laser data. A Radon transformation is applied to a binary image created from the laser data to identify the seed position and orientation of the most probable linear features. A circle-growing technique is then applied to the seed points to correct the positions of the linear features. Once all the seed points are corrected, straight lines are fitted to represent the linear features in the original laser data. The height of each linear feature is computed by fitting the maximum and minimum heights of the laser points that fall inside the circles, which yields a 3-D model of the linear features. The algorithm succeeds in extracting continuous linear features automatically. If the linear features are discontinuous or the data are occluded, automated extraction becomes considerably more complex and may fail; in such cases, a semi-automated extraction is recommended.

Keywords: Laser Scanning, Feature Extraction, Mobile Mapping

1. INTRODUCTION

Laser point data scanned from a vehicle-borne platform can be used for 3-D modeling of various urban features. Apart from building faces, roads and trees, many other features can be modeled from laser data, such as cables, poles, fences or guardrails, tunnels, vehicles and pedestrians. Refer to Manandhar and Shibasaki (2002) for details on the extraction of some of these features.
In this paper we focus on the possibility of automated extraction of linear features (especially guardrails) from laser data. The laser data carry no information other than the range distance; the data are bare 3-D real-world coordinates. Figure 1 shows the mapping vehicle equipped with line cameras and the laser scanning system that provides the laser data.

Figure 1: Vehicle-borne Laser Mapping System (line cameras and laser scanner)
2. DEFINITION

We define linear features as those that reflect laser points with a linear geometry when viewed along the vehicle trajectory (along track). For example, laser points reflected by cables and guardrails are classified as linear features. Only a few points are reflected from such objects in a single scan; the density of reflected points depends on the across-track resolution of the scanner, defined as the distance between successive laser points within the same scan line. Laser points reflected by poles are not classified as linear features, since they lie linearly along the scanning direction (across track), not along the vehicle trajectory.

3. LINEAR FEATURE EXTRACTION

There are different approaches to segmenting range data for feature extraction, depending basically on the type of range data and the features to be extracted. Almost all segmentation algorithms have been developed either for fixed-platform range data, for air-borne range data, or for industrial applications. Hoover et al. conducted a detailed comparative study of various range image segmentation algorithms. Those algorithms were developed for fixed platforms and use images from laser range finders or structured-light scanners, whose pixel values are either range distances or intensity values. They are based on the analysis of surface primitives, for example H-K thresholding (Besl et al., 1988; Trucco et al., 1995), scan-line division (Jiang et al., 1994), Hough transformation (Newman et al., 1993) and morphological analysis (Pitas, 1991). The vehicle-borne laser data, in contrast, are irregularly distributed point-cloud data; they contain only the 3-D coordinates of each reflected laser point and no intensity values. The raw data acquired by the laser scanner are geo-referenced and classified into road and non-road classes.
Refer to Manandhar and Shibasaki (2001) for details on the classification of the data into these two classes. We use the laser points classified as non-road for the extraction of linear features. Figure 2 shows the classified road and non-road laser points.

Figure 2: Road and Non-road Classified Laser Points (red: road points; blue: non-road points)
The feature extraction is done in four major steps: (a) conversion to a raster image and image analysis, (b) identification of seed points by Radon transformation, (c) correction of the seed points / lines by circle growing, and (d) fitting of straight lines to the corrected points.

3.1 Image Creation and Analysis

A raster image is created from the non-road point-cloud laser data. A blank grid is defined with square cells; the grid size is fixed at 20 cm x 20 cm. A square cell is not strictly necessary: the grid size can be varied based on the laser scanner's along-track resolution (the distance between successive scan lines, which depends on the vehicle speed). We have found a 20 cm grid to be effective for our data. The extent of the grid is defined by the minimum and maximum x and y coordinates of the laser data. The laser points are projected onto the grid, which is a horizontal (x-y) plane. Different types of images can be created while projecting the laser points onto the grid, e.g. a density image, a maximum-height image or an average-height image. The density image is simply the count of the laser points falling in each grid cell; linear features like guardrails and cables exhibit very low values in this image. The maximum-height image stores the maximum height of all the points falling in each cell. Building faces exhibit high values in the maximum-height image, whereas guardrails exhibit low values, since they lie lower than buildings (roof edges). The density and maximum-height images are created for visualization, to show how different features appear when such images are created from laser point-cloud data.

Figure 3: Density Image (Number of Laser Points per Grid Cell)
Figure 4: Maximum Height Image
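The projection of the point cloud onto a 20 cm grid can be sketched in a few lines of numpy. This is a minimal sketch, not the authors' implementation; the function name and the return layout are assumptions:

```python
import numpy as np

def rasterize(points, cell=0.2):
    """Project 3-D laser points (N x 3 array of x, y, z) onto a
    horizontal x-y grid, returning a density image (points per cell)
    and a maximum-height image, as described in Section 3.1.
    `cell` is the grid size in metres (20 cm in the paper)."""
    xy_min = points[:, :2].min(axis=0)
    # Map each point to an integer cell index.
    idx = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    density = np.zeros(shape, dtype=int)
    max_h = np.full(shape, -np.inf)
    for (i, j), z in zip(idx, points[:, 2]):
        density[i, j] += 1                  # count points per cell
        max_h[i, j] = max(max_h[i, j], z)   # keep the highest point
    max_h[density == 0] = 0.0               # empty cells get height 0
    return density, max_h
```

The same loop extends naturally to an average-height image by accumulating height sums and dividing by the density counts.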
3.2 Binary Image Creation

A binary image is created by filtering the maximum-height image with maximum and minimum height thresholds. The threshold values are set based on the definition of a guardrail. The height of each laser point is normalized before the image is created, by setting the road-surface height to zero; thus a point one meter above the road surface has a height value of one meter. Guardrails generally appear along the roadsides, or on the road itself to separate driving lanes, and normally have a height of about one meter. We therefore set a maximum height threshold of 1.2 m and a minimum height threshold of 0.2 m, so that the selected grid cells have values from 0.2 m to 1.2 m. By changing these thresholds, other linear features (like cables) can also be identified, though they need further analysis. Figure 5 shows the binary image. At least two linear features (guardrails) are clearly visible; a third is also visible but is not as continuous as the other two.

Figure 5: Binary Image Overlaid with Straight Lines from the Radon Transformation

3.3 Line Detection by Radon Transformation

The Radon transformation is used to detect the lines in the binary image. The Radon transformation represents an image as a collection of projections along various directions; a projection can be computed along any angle θ. In general, the Radon transform of f(x, y) is the line integral of f parallel to the y′ axis, given by Equation 1 and Equation 2:

R_θ(x′) = ∫_{−∞}^{+∞} f(x′ cos θ − y′ sin θ, x′ sin θ + y′ cos θ) dy′   (1)
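The thresholding step above amounts to a single vectorized comparison on the normalized maximum-height image. A minimal sketch with a hypothetical 3 x 3 height image (the thresholds 0.2 m and 1.2 m are the paper's; the sample values are invented for illustration):

```python
import numpy as np

# Hypothetical normalized maximum-height image: heights in metres
# above the road surface, 0.0 marking empty cells.
max_h = np.array([
    [0.0, 1.0, 3.5],
    [0.9, 1.1, 0.1],
    [2.0, 0.8, 1.3],
])

# Guardrail definition from the paper: keep cells between 0.2 m and 1.2 m.
binary = (max_h >= 0.2) & (max_h <= 1.2)
```

Swapping in other thresholds (e.g. 6.0 m to 12.0 m, as Section 4 suggests for cables) selects other classes of linear features from the same height image.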
where

x′ = x cos θ + y sin θ
y′ = −x sin θ + y cos θ   (2)

However, the Radon transformation only provides the directions in which straight lines appear; it does not give the actual length of a line segment, nor can it distinguish individual lines that share the same direction. We therefore select the prominent peaks in the Radon image as seed line directions. A threshold, set at 50% of the maximum peak value, is applied to the Radon space to select only the candidate peaks. A morphological dilation operation with a disk structuring element of radius one pixel is then applied to remove small neighboring peaks. The dilated (eroded) value at a pixel x is the maximum (minimum) value of the image in the window defined by the structuring element when its origin is at x. Dilation is expressed mathematically by Equation 3, and the structuring element S is given by Equation 4:

A ⊕ S = { x : (Ŝ)_x ∩ A ≠ ∅ }   (3)

    | 0 1 0 |
S = | 1 1 1 |   (4)
    | 0 1 0 |

Figure 6 shows the Radon transform of the binary image shown in Figure 5, and Figure 7 shows the result of the morphological operation on the Radon image to select only the peak values. These peaks are taken as the orientations of the major linear features in the image and are used to generate candidate straight lines, which are plotted over the binary image in Figure 5.

Figure 6: Radon Transformation of the Binary Image
Figure 7: Peak Selection by Morphological Operation
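For a binary image, the discrete Radon transform can be computed directly by binning the projected coordinate of each "on" pixel for every angle, after which the 50%-of-maximum peak thresholding follows as one comparison. A pure-numpy sketch under these assumptions (function name, angle sampling and bin layout are choices made here, not taken from the paper):

```python
import numpy as np

def radon_peaks(binary, angles=None, frac=0.5):
    """Discrete Radon transform of a binary image plus the paper's
    peak selection: keep cells of Radon space reaching at least
    `frac` (50%) of the maximum peak value."""
    if angles is None:
        angles = np.arange(180)            # one projection per degree
    ys, xs = np.nonzero(binary)
    cy, cx = (np.array(binary.shape) - 1) / 2.0
    n = int(np.ceil(np.hypot(*binary.shape)))   # projection length
    sinogram = np.zeros((n, len(angles)))
    for k, a in enumerate(angles):
        t = np.deg2rad(a)
        # Signed distance of each on-pixel from the line through the
        # image centre perpendicular to direction t.
        r = (xs - cx) * np.cos(t) + (ys - cy) * np.sin(t)
        np.add.at(sinogram[:, k], np.round(r).astype(int) + n // 2, 1)
    peaks = sinogram >= frac * sinogram.max()
    return sinogram, peaks
```

A collinear run of pixels then accumulates in a single Radon cell, so an 11-pixel horizontal line produces a peak of height 11 at the 90-degree projection. The subsequent dilation with the cross-shaped element of Equation 4 would be applied to `peaks` to suppress small neighboring maxima.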
3.4 Correction of Identified Linear Features

The straight lines detected by the Radon transform indicate only the orientation of lines in the image, not their true segments or shapes. A peak in the Radon image is caused by the longest line section in the image; if there are multiple line segments with the same orientation, only one peak appears in the Radon space. The identified lines must therefore be analyzed further to recover their true orientation and extent. This is accomplished by circle growing, which checks whether laser points correspond to every section of a line segment. The identified lines are projected back onto the laser data filtered with the height thresholds (for guardrails). The point corresponding to the peak of an identified line is taken as the initial seed point for circle growing. Circles of radius 20 cm (the grid size of the raster image) are grown at every line section until some laser points fall inside the circle. The growth is terminated if no laser points have been found by the time the radius reaches two meters, which indicates that there is no line segment at this point or that the linear feature is not continuous. This approach also enables us to trace line segments that are not exactly straight. A radius of two meters corresponds to a search radius of five pixels on either side of the line / point in the image. Once laser points are found inside a circle, the growth is stopped and the mean of their x and y coordinates is taken as the new point (on the new line segment); the minimum and maximum heights of the laser points inside the circle are also computed. This is performed for every line section, so the single line identified from the Radon transform is divided into several segments, depending on the circle radius. Figure 8 shows the results of circle growing.

Figure 8: Circle Growing on a Line Identified from the Radon Transformation, with Laser Point Data
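The circle growing above, and the robust bi-square line fitting described in the next step, can be sketched as follows. This is a minimal interpretation, not the authors' code: the function names, the used-point bookkeeping, and the bi-square tuning constant (a common robust-fitting default) are assumptions made here:

```python
import numpy as np

def grow_circles(points, seed, direction, step=0.2, r_max=2.0):
    """Trace a linear feature through laser points (N x 3 array) by
    circle growing: at each station, grow a search circle (initial
    radius 20 cm, the raster grid size) until unused points fall
    inside or the radius reaches 2 m.  The mean x-y of the captured
    points becomes the corrected segment point; their min/max
    heights are recorded for the 3-D patches."""
    xy = points[:, :2]
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)                 # unit step direction
    pos = np.asarray(seed, float)
    used = np.zeros(len(points), dtype=bool)
    segments = []
    while True:
        dist = np.linalg.norm(xy - pos, axis=1)
        r = step
        inside = ~used & (dist <= r)
        while not inside.any() and r < r_max:
            r += step                      # grow the circle
            inside = ~used & (dist <= r)
        if not inside.any():
            break                          # no points within 2 m: feature ends
        pts = points[inside]
        used |= inside
        segments.append((pts[:, 0].mean(), pts[:, 1].mean(),
                         pts[:, 2].min(), pts[:, 2].max()))
        pos = np.array(segments[-1][:2]) + d * step   # advance along line
    return segments

def bisquare_line_fit(x, y, iters=20, tune=4.685):
    """Robust 2-D straight-line fit y = a + b*x by iteratively
    re-weighted least squares with Tukey bi-square weights, the
    scheme described in Section 3.4 of the paper."""
    A = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(x)
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
        resid = y - A @ coef
        # Robust scale estimate via the median absolute deviation.
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12
        u = resid / (tune * scale)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
    return coef  # (intercept, slope)
```

The bi-square weight falls to zero for residuals beyond the tuning limit, so gross outliers are simply excluded from later iterations rather than merely down-weighted.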
Line segments having the same circle radius are grouped together to form a single segment. The line generated by connecting these points may not be straight, so we perform a robust 2-D straight-line fit. The robust fit uses an iteratively re-weighted least-squares algorithm, in which the weight for each iteration is calculated by applying the bi-square function to the residuals from the previous iteration. This algorithm gives lower weight to points that do not fit well, so the result is less sensitive to outliers than ordinary least-squares regression. Robust fitting is also applied separately to the maximum and minimum height data. We thus obtain fitted x, y, z_min and z_max coordinates for each line segment, from which 3-D patches are created to represent the guardrails. The final result is shown in Figure 9.

Figure 9: 3-D Model of a Linear Feature (Guardrail, Shown as Blue Patches) Extracted Automatically from the Laser Point Data and Overlaid with the Laser Points for Verification

4. CONCLUSION

It is possible to identify linear features from vehicle-borne laser data. The algorithm succeeds in extracting continuous linear features automatically. If the linear features are discontinuous (or span only a few meters) or the data are occluded, automated extraction becomes considerably more complex and may fail. In such cases, a semi-automated extraction is recommended. Real data contain both continuous and discontinuous linear features, so fully automatic extraction of all linear features is only partially successful. However, the algorithm can be used to identify candidate linear features in a semi-automated process in which the user identifies the laser points reflected by the linear features; this reduces the operation time to some extent, or at least eases the manual operation.
With some improvement, this algorithm can also be used to identify cables automatically; in that case, suitable height thresholds should be assigned (e.g. 6.0 m minimum and 12.0 m maximum). We have some preliminary results for identifying cables as well. The algorithm is undergoing further development to improve its robustness and to handle the extraction procedure more interactively with the user when the semi-automated process is necessary.
REFERENCES

a) Journal Papers

Bose, S.K., Biswas, K.K., Gupta, S.K. (1996), An integrated approach for range image segmentation and representation, Artificial Intelligence in Engineering, 1, 243-252.

Pitas, I., Maglara, A. (1991), Range image analysis by using morphological signal decomposition, Pattern Recognition, Vol. 24, No. 2, pp. 165-181.

b) Conference Papers

Manandhar, D., Shibasaki, R. (2002), Auto-extraction of urban features from vehicle-borne laser data, IAPRS, Vol. 34, Part 4, Geospatial Theory, Processing and Applications, Ottawa, 2002.

Manandhar, D., Shibasaki, R. (2001), Feature extraction from range data, Proceedings of ACRS 2001, 22nd Asian Conference on Remote Sensing, Singapore, Vol. 2, pp. 1113-1118, 5-9 November 2001.

c) Other Documents

Hoover, A., Jean-Baptiste, G., Jiang, X., Flynn, P.J., Bunke, H., Goldgof, D., Bowyer, K., A Comparison of Range Image Segmentation Algorithms, URL: http://marathon.csee.usf.edu/range/seg-comp/segcomp.html

Matlab Manual on Optimization, Matlab Software, www.mathworks.com