Cluster Analysis and Priority Sorting in Huge Point Clouds for Building Reconstruction

Wolfgang von Hansen, Eckart Michaelsen, Ulrich Thönnessen
FGAN-FOM, Gutleuthausstr. 1, Ettlingen, Germany
E-mail: wvhansen@fom.fgan.de

Abstract

Terrestrial laser scanners produce point clouds with a huge number of points within very limited surroundings. In built-up areas, many of the man-made objects are dominated by planar surfaces. We introduce a RANSAC based preprocessing technique that transforms the irregular point cloud into a set of locally delimited surface patches in order to reduce the amount of data and to achieve a higher level of abstraction. In a second step, the resulting patches are grouped into large planes while small and irrelevant structures are ignored. The approach is tested on a dataset of a built-up area, which is described very well by only a small number of geometric primitives. The grouping emphasizes man-made structures and could be used as a preclassification.

1. Introduction

In addition to photogrammetric imaging, laser scanning has become a major data source for the acquisition of 3D city models for tourist information or the documentation of cultural heritage. Airborne systems are widely used, but terrestrial laser scanners are increasingly available as well. They provide a much higher geometric resolution and accuracy (mm vs. dm) and are able to acquire building facade details, which is a requirement for realistic virtual worlds. However, the operating area is limited by occlusions and by the maximum range of the laser beam. It is not possible to capture an extended area from one position alone, so typical raw datasets consist of several overlapping, huge point clouds.

The overall objectives are (i) to generalize the point cloud to higher-level geometric primitives and to group them into objects, both for data reduction and as preparation for high-level cognitive tasks, and (ii) to fuse the datasets and coregister them into a single geometric reference frame using these primitives.

While automatic matching of multiple point clouds is still a topic of research [2, 6], different approaches for segmentation and representation through geometric primitives exist. Segmentation techniques include clustering based on local surface normal analysis [1, 6], region growing using scan geometry and point neighborhoods [2], generation of a discrete grid [8], or a split-and-merge scheme applying an octree structure [9]. The objects are often represented by planar elements recovered through RANSAC schemes [1], creation of a triangular irregular network [4], tensor voting [8] or least squares adjustment [9]. The tensor voting scheme is able to describe not only planes, but also linear structures like high-voltage lines.

In this paper, a preprocessing technique is presented that transforms the point cloud into a set of object surfaces via a two-stage process. The first stage is a RANSAC based generation of locally delimited surface patches from the point cloud. The second stage groups patches belonging to the same surface in object space.

2. Generation of surface patch elements

The input is a cloud of 3D measurements gathered by a terrestrial laser scanner, from which locally delimited planes shall be extracted as surface patches. These may represent parts of larger planar objects but may just as well coincide with small object surfaces. The transformation is split up into two subprocesses: a partitioning of the point cloud into spatial bins and the robust estimation of the dominant plane in each bin.
2.1. Spatial data partitioning

The set of 3D points is partitioned and assigned to 3D volume cells using a Cartesian raster. All points in one of the raster cells will be denoted by X. Two objectives are satisfied by this partitioning: the raw point cloud is organized into small blocks that can be processed separately, and, since each cell only covers a small part of the complete scene, local features in object space are emphasized.

Figure 1. Panoramic image of the test area showing the amount of reflected light as grayvalues.

Since the laser scanning device is set up roughly aligned to the horizontal plane, there exists an implicit alignment between the cells and the world coordinate system. We have to deal with a quantization problem: on the one hand, the raster should be chosen small enough that any important object surface dominates at least one cell; on the other hand, the cells should not be too small, because otherwise unwanted microstructure would dominate the results. For the experiments, a raster size of 1 m was chosen in order to describe facade and roof surfaces while safely ignoring small structures.
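To make this partitioning step concrete, the following is a minimal sketch of binning a cloud into cubic raster cells. It is our illustration rather than the authors' implementation; it assumes numpy and an (N, 3) array of points, the function name is ours, and the 1 m default cell size is taken from the text.

    import numpy as np
    from collections import defaultdict

    def partition_into_cells(points, cell_size=1.0):
        """Assign each 3D point to a cubic Cartesian raster cell of edge length cell_size.

        Returns a dict mapping integer cell indices (ix, iy, iz) to an (M, 3) array of
        the points falling into that cell, so every cell can be processed separately.
        """
        cells = defaultdict(list)
        for idx, point in zip(np.floor(points / cell_size).astype(int), points):
            cells[tuple(idx)].append(point)
        return {idx: np.asarray(pts) for idx, pts in cells.items()}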

2.2. Robust estimation of plane patches

The second stage is the robust estimation of the dominant plane, i.e. the one with the biggest support from the 3D points X, independently for each of the raster cells. Only one plane is computed per cell, which seems counterintuitive, but if the cell size is chosen such that almost every significant object surface is the dominant plane in at least one cell, no information about the object is lost.

Robust estimation implies a result free of outliers; in our case this involves not only the estimation of the plane parameters p := (n, d) (Eq. 2), but also of the set of points X̂ ⊆ X lying on the plane. We have implemented it using the well known RANSAC strategy [3]: a minimum set of three non-collinear points X_i ⊂ X is chosen randomly and the uniquely defined plane p_i is computed. Then for all points x_j, their distance d_ji to the plane p_i is computed, defining the set of inliers of run i as

    \hat{X}_i := \{\, x_j \in X \mid d_{ji} \le d_{\max},\; j = 1 \ldots |X| \,\}    (1)

These steps are repeated, and the largest inlier set X̂ is retained as the result. In a final step, the final plane parameters p̂ are estimated from all points in X̂.

The plane, represented by the Hesse normal form

    a x + b y + c z + d = \mathbf{n} \cdot \mathbf{x} + d = 0,    (2)

has an infinite extent. We are interested in a small, delimited plane representing only the points X̂. Therefore, in addition to the normal vector n and the distance d to the origin, the mean o of the point cloud X̂ is stored as well. For the visualization, all points X̂ are projected orthogonally onto p̂, thus forming a 2D point set x̂. The convex hull of x̂ is determined and its (planar) polygon is transformed back into 3D space as the boundary of the plane element. Similar to the 3D data partitioning, the 2D points are partitioned and assigned to a 2D raster in order to compute a texture image. Some of the 2D cells may remain empty because the point cloud is sparse or contains holes. A filling factor λ is computed and will be used as a weight for the grouping step. Subsequently, such planes are called patches and denoted by P.
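As an illustration of this stage, here is a minimal RANSAC sketch for the dominant plane of one cell. It is ours, not the authors' code; it assumes numpy, represents a plane by (n, d, o) with n · x = d (d being the distance to the origin, as used for the grouping in Sec. 3), and the default d_max = 3 cm and the iteration count are placeholders (3 cm is only implied by the remark on t_0 in Sec. 3).

    import numpy as np

    def fit_plane(points):
        """Least-squares plane through points: unit normal n, offset d = n . o, and mean o."""
        o = points.mean(axis=0)
        # Normal = right singular vector of the smallest singular value of the centered points.
        _, _, vt = np.linalg.svd(points - o)
        n = vt[-1]
        return n, float(n @ o), o

    def estimate_dominant_plane(X, d_max=0.03, n_iter=200, seed=0):
        """RANSAC estimate of the dominant plane of one raster cell (Sec. 2.2).

        Returns (n, d, o, inliers) or None if no valid plane is found.
        """
        if len(X) < 3:
            return None
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_iter):
            sample = X[rng.choice(len(X), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:                           # (nearly) collinear sample, draw again
                continue
            n /= norm
            d = float(n @ sample[0])
            inliers = X[np.abs(X @ n - d) <= d_max]   # points within d_max of the plane
            if best is None or len(inliers) > len(best):
                best = inliers
        if best is None or len(best) < 3:
            return None
        # Final plane parameters from all inliers; their mean o anchors the local patch.
        n, d, o = fit_plane(best)
        return n, d, o, best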
3. Grouping of planes

Each patch instance P has the attributes (n, d, o) assigned to it. Regarding coplanarity as a major property of man-made objects, we group these instances into planar cluster objects

    C \in \mathcal{C} \subseteq \mathcal{P}(P), \quad \forall\, P_i, P_j \in C : \; \mathbf{n}_i \cdot \mathbf{o}_j \approx d_i,    (3)

where P(P) is the power set of P and ≈ means approximately equal. Given such an object instance C, we again assign attributes (n̂, d̂, ô) to it: ô by averaging over {o_i}, n̂ as the null space of the set {o_i - ô}, and d̂ = ô · n̂. The set of all objects {C} is a subset of P(P). We propose to list only the few most important ones in a priority-ordered sequence. The first criterion for the importance of an object C is its size |C|. The second criterion is its 2D property, captured by the ratio s_3/s_2 between the third and the second singular value acquired during the null space determination. As combined criterion we choose

    c := -\ln(s_3 / s_2)\, |C|.    (4)
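To make the cluster attributes and the combined criterion of Eq. (4) concrete, here is a small sketch of our own (not from the paper). It assumes numpy, that each patch is a tuple (n, d, o), and that a cluster contains at least three patches so that s_2 and s_3 are defined.

    import numpy as np

    def cluster_attributes(patches):
        """Attributes (n_hat, d_hat, o_hat) of a cluster C of patches (Sec. 3) and the
        priority criterion c of Eq. (4). Assumes len(patches) >= 3."""
        origins = np.array([o for (_, _, o) in patches])
        o_hat = origins.mean(axis=0)
        # Null space of {o_i - o_hat}: the right singular vector of the smallest singular value.
        _, s, vt = np.linalg.svd(origins - o_hat)
        n_hat = vt[-1]
        d_hat = float(n_hat @ o_hat)
        s = np.maximum(s, 1e-12)          # guard against a perfectly planar cluster
        # Large clusters whose origins spread in a plane (s_3 << s_2) rank highest.
        c = -np.log(s[2] / s[1]) * len(patches)
        return n_hat, d_hat, o_hat, c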

Figure 2. 3D model of the main building.

The search for an approximation of this ordered list of objects C is fostered by a sensible assessment criterion on the objects P, for which we utilize the filling factor λ (Sec. 2.2). The list of all patch elements P is prioritized according to λ. The production of cluster objects is started by picking seed objects with the highest priority. For each such seed object P_i, the set of possible partners {P_j} is queried for

    \| \mathbf{n}_i - \mathbf{n}_j \| < t_n \;\wedge\; | \mathbf{n}_i \cdot \mathbf{o}_j - d_i | < t_0.    (5)

The maximal set of objects {P_j} fulfilling this aggregating constraint is used for the construction of a new object C. We emphasize that this procedure does not exactly give the set of cluster objects defined by Eq. 3: it suppresses subsets of bigger clusters, and it does not really test for n_j · o_i ≈ d_j. The similarity condition on n_i and n_j is used for faster query handling, and the simple threshold t_0 in the coplanarity query is a rough approximation, being aware of work such as [5]. For t_0 we used 6 cm here, which is double of what has been used in Eq. 1. The approach has a bias towards listing more dominant and important objects first, so that the search can be interrupted by any external demand and output the set of objects C obtained so far.

There will be many similar objects in the output set, differing in very few predecessors only, due to the straightforward control structure. This motivates a second ordering procedure operating on the output set of the first. It computes the criterion c_i (Eq. 4) for each object C_i and picks the object with maximal c first. From this object the attribute vector (n_i, d_i, o_i) is taken to compute a suppressing factor

    f_{ij} = 1 - \exp\!\left( -\| \mathbf{n}_i - \mathbf{n}_j \|^2 - \lambda\,(d_i - d_j)^2 - \gamma\,\| \mathbf{o}_i - \mathbf{o}_j \|^2 \right).    (6)

Figure 3. Remaining major planes from Fig. 2.

Before picking the next element, this factor is multiplied into all the remaining assessments c_j, resulting in suppression of objects similar to the first one. We see a close relationship to perceptual grouping as discussed in the computer vision community [7]: coplanarity can be viewed as a straightforward generalization of collinearity to just one dimension more.
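The following sketch, again ours and not the authors' implementation, only mirrors the described control flow: the partner query of Eq. (5) and the greedy, suppression-based ordering built on Eq. (6). The values of t_n, lam and gamma are placeholders (only t_0 = 6 cm is given in the text); patches are assumed to be (n, d, o) tuples and clusters (n, d, o, c) tuples as returned by cluster_attributes() above.

    import numpy as np

    def query_partners(seed, patches, t_n=0.1, t_0=0.06):
        """Eq. (5): patches whose normal is close to the seed's and whose origin lies
        approximately on the seed's plane."""
        n_i, d_i, _ = seed
        return [p for p in patches
                if np.linalg.norm(n_i - p[0]) < t_n and abs(float(n_i @ p[2]) - d_i) < t_0]

    def priority_sort(clusters, lam=1.0, gamma=0.01):
        """Greedy priority ordering with suppression (Eq. 6): pick the cluster with the
        highest remaining assessment c, then damp the assessments of similar clusters."""
        scores = np.array([c for (_, _, _, c) in clusters], dtype=float)
        order, remaining = [], set(range(len(clusters)))
        while remaining:
            i = max(remaining, key=lambda k: scores[k])
            order.append(i)
            remaining.remove(i)
            n_i, d_i, o_i, _ = clusters[i]
            for j in remaining:
                n_j, d_j, o_j, _ = clusters[j]
                f = 1.0 - np.exp(-np.sum((n_i - n_j) ** 2)
                                 - lam * (d_i - d_j) ** 2
                                 - gamma * np.sum((o_i - o_j) ** 2))
                scores[j] *= f      # clusters similar to the one just picked are suppressed
        return order                # indices of clusters, most important first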

4. Experiments and results

The investigated dataset has been acquired with a Z+F Imager 5003 terrestrial laser scanner. The chosen scene (Fig. 1) is a courtyard surrounded by several buildings at various distances. The reflected light from the laser beam is recorded as a grayvalue and is used for texturing. Because no light is returned from the open sky, it appears black. The dataset contains about 100 million 3D points and covers a spherical area with a radius of 50 m.

The extracted planes are shown for two different viewpoints in the scene (Figs. 2/4). Information is missing at some object plane boundaries because of the restriction to only one plane per raster cell. Parts of the roof structure in Fig. 2 are missing due to obstruction by the eaves gutter and a pergola (cf. Fig. 1). The empty circular area on the ground gives a good hint of the position of the scanner. The tree (Fig. 4) has been modeled quite well, even though the approach demands a description through planes only. The grouping led to 24 object surfaces, shown in different colors (Figs. 3/5). Only man-made structures are left over, while the tree and other smaller objects have been successfully removed.

Figure 4. Example of a natural object (tree).

Figure 5. Remaining major planes from Fig. 4.

Table 1. Data amount at different stages.

                        #points / 10^6    #planes    #surfaces
    Point cloud              104
    Small planes               1.7          2616
    Object surfaces            0.8          1103           24

Tab. 1 shows the data reduction through the stages. The number of points is reduced to 2% by averaging the raw points over texture pixels of 3 x 3 cm. The complete scene is described with only 2616 planar elements instead of over 100 million points. The grouping step reduces the data again to 1%, so that only 24 object surfaces remain. The selection involved in the grouping rejects about half of the small planes as unimportant.

5. Conclusions and future work

We have shown a robust preprocessing method to extract planar surfaces from 3D point clouds. The visualization of this intermediate data shows a good coverage of the scene. The grouping approach selected man-made structures very well, so that this step can already be regarded as a preclassification. Giving a brief, priority-ordered list of the important, probably man-made surfaces of a scene is an important step towards automatically understanding the scene. Future tasks are the registration of multiple datasets based on the extracted planes, the resolution of conflicts, refinement of the grouping step, and a grouping of surfaces to objects.

References

[1] F. Bretar and M. Roux. Hybrid Image Segmentation Using LIDAR 3D Planar Primitives. In G. Vosselman and C. Brenner, editors, Laser Scanning 2005, volume XXXVI-3/W19 of IAPRS, 2005. URL: www.commission3.isprs.org/laserscanning2005/papers/072.pdf.
[2] C. Dold and C. Brenner. Automatic Matching of Terrestrial Scan Data as a Basis for the Generation of Detailed 3D City Models. In O. Altan, editor, Proc. of the XXth ISPRS Congress, volume XXXV-B3 of IAPRS, 2004. URL: www.isprs.org/istanbul2004/comm3/papers/429.pdf.
[3] M. A. Fischler and R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, 24(6):381-395, 1981.
[4] L. Grammatikopoulos et al. Automatic Multi-Image Photo-Texturing of 3D Surface Models Obtained with Laser Scanning. In Vision Techniques Applied to the Rehabilitation of City Centres, 2004. URL: www.survey.ntua.gr/main/labs/photo/staff/gkarras/karras cipa Lisbon 04.pdf.
[5] S. Heuel. Uncertain Projective Geometry: Statistical Reasoning for Polyhedral Object Reconstruction, volume 3008 of LNCS. Springer, 2004.
[6] R. Liu and G. Hirzinger. Marker-free Automatic Matching of Range Data. In R. Reulke and U. Knauer, editors, Panoramic Photogrammetry Workshop, volume XXXVI-5/W8 of IAPRS, 2005. URL: www.informatik.hu-berlin.de/sv/pr/panoramicphotogrammetryworkshop2005/paper/PanoWS Berlin2005 Rui.pdf.
[7] D. G. Lowe. Perceptual Organization and Visual Recognition. Kluwer, Boston, 1985.
[8] H.-F. Schuster. Segmentation of LIDAR Data Using the Tensor Voting Framework. In O. Altan, editor, Proc. of the XXth ISPRS Congress, volume XXXV-B3 of IAPRS, 2004. URL: www.isprs.org/istanbul2004/comm3/papers/426.pdf.
[9] M. Wang and Y.-H. Tseng. LIDAR Data Segmentation and Classification Based on Octree Structure. In O. Altan, editor, Proc. of the XXth ISPRS Congress, volume XXXV-B3 of IAPRS, 2004. URL: www.isprs.org/istanbul2004/comm3/papers/286.pdf.

Year: 2006
Author(s): Hansen, W. von; Michaelsen, E.; Thönnessen, U.
Title: Cluster analysis and priority sorting in huge point clouds for building reconstruction
DOI: 10.1109/ICPR.2006.1197 (http://dx.doi.org/10.1109/icpr.2006.1197)
2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Details: Tang, Y.Y.; International Association for Pattern Recognition (IAPR): ICPR 2006, 18th International Conference on Pattern Recognition, Proceedings, Vol. 1, August 20-24, 2006, Hong Kong. Los Alamitos, Calif.: IEEE Computer Society, 2006. ISBN 0-7695-2521-0, pp. 23-26.