Object-oriented Model based 3D Building Extraction using Airborne Laser Scanning Points and Aerial Imagery


Object-oriented Model based 3D Building Extraction using Airborne Laser Scanning Points and Aerial Imagery
Wang Langyue
March, 2007

Object-oriented Model based 3D Building Extraction using Airborne Laser Scanning Points and Aerial Imagery
by Wang Langyue
Thesis submitted to the International Institute for Geo-information Science and Earth Observation in partial fulfilment of the requirements for the degree of Master of Science in Geo-information Science and Earth Observation, Specialisation: Geoinformatics.
Thesis Assessment Board
Chairman: Prof. Dr. A. Stein
External examiner: Dr. C. Brenner
Supervisor: Prof. Dr. M.G. Vosselman
Second supervisor: Ir. S.J. Oude Elberink
INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION, ENSCHEDE, THE NETHERLANDS

Disclaimer
This document describes work undertaken as part of a programme of study at the International Institute for Geo-information Science and Earth Observation. All views and opinions expressed therein remain the sole responsibility of the author, and do not necessarily represent those of the institute.


Abstract
Motivated by the increasing demand for 3D building models, the promising trend of using Airborne Laser Scanning (ALS) data and aerial imagery for 3D building extraction, and the popularity of the object-oriented method for modelling spatial objects in Database Management Systems, this research makes a first attempt to develop and use an object-oriented building model for 3D building extraction by integrating ALS points and aerial imagery. The thesis starts from the assumption that the integration of an object-oriented building model, ALS data and aerial imagery is a most promising way to extract 3D buildings. The main objective of this research is to develop a practical object-oriented model based method for 3D building extraction by integrating ALS points and aerial imagery. To achieve this objective, two main problems are addressed in this thesis: to design and develop an object-oriented model dedicated to 3D building extraction, and to develop a practical workflow that integrates this model into the process of 3D building data extraction using ALS points and aerial imagery. First, an analysis of existing building models, modelling techniques and ALS and aerial imagery based 3D building extraction methods is performed to provide the theoretical and methodological background and practical experience for the development of the object-oriented building model and the model based data extraction method. Then an object-oriented building model for 3D building extraction is developed using UML, integrating data collection and construction methods, geometry and topology, semantics and properties, as well as data storage and management of 3D buildings, into one model. The model provides flexible decomposability for all kinds of buildings, and three new building operations for building combination are presented. Based on the object-oriented building model, the roof recognition and construction problem is solved by integrating ALS points and aerial imagery. The object-oriented model is then integrated into the 3D building extraction process, and finally a workflow with three automation levels (automatic, semi-automatic and interactive processing) for object-oriented model based 3D building extraction using ALS points and aerial imagery is worked out. Using MATLAB and other related development tools, the workflow is implemented and tested on four buildings of different types. All implementation aspects and algorithms are presented. In the meantime, by grouping the program code, a prototype production system is developed and used to extract and reconstruct the 3D building models of a small test area. Finally, by analysing the extracted 3D building models and discussing and evaluating the practical aspects of the proposed method, it is shown that the developed object-oriented building model has advantages over the existing models and modelling techniques and can improve and facilitate 3D building data extraction. The presented object-oriented model based 3D building extraction method is practical and promising, with room for improvement towards a higher degree of automation and extended application potential.
The integration of an object-oriented building model, ALS data and aerial imagery can resolve several important practical problems in the development of a production system for 3D building data extraction from these data sources, and can result in highly practical and versatile systems.

Acknowledgements
As I began to write this, I got the feeling that what I will obtain is more than an MSc degree. I have learnt many scientific and research skills and made many friends during my study in the Netherlands. I would like to take this opportunity to express my sincere gratitude. Until I got the opportunity offered by ITC in 2005, I had been hoping for a higher education for 9 years. I am extremely grateful to the International Institute for Geo-information Science and Earth Observation (ITC) and my home organization, the Heilongjiang Bureau of Surveying and Mapping, for giving me such an excellent opportunity to make my dream come true. My deepest appreciation goes to my supervisors: Prof. George Vosselman, Mr. Ir. Sander Oude Elberink and Dr. Stephan Heuel. Thank you for your time to guide and supervise me with so much valuable advice and so many suggestions and comments, which stimulated me to do this research with great enjoyment. I must say, from you I have learnt a lot, especially the scientific way of thinking, which is most important for my future professional career. I feel lucky to have had such a combination of supervision from you. Many thanks go to Prof. Alfred Stein, Prof. Menno-Jan Kraak, Mr. Gerrit Huurneman, Dr. Valentyn Tolpekin and all the lecturers and staff in the EOS and GIP departments; thank you for the excellent lectures and coordination during the eighteen months. Also, to the staff of the Bureau of Education Affairs, the ITC International Hotel and all the other departments at ITC, thank you for your good work, which made me feel at home and enjoy my time here enormously. Special gratitude goes to Prof. John van Genderen for his valuable help and support. Thank you to all my classmates in GFM2 for your cooperation, support and help. Especially Karim, Lim Coco, Ben, Sofi, Ara and Mahnaz, you have made our cluster feel like a home. There is no doubt that the friendship among all GFM2 students will be unforgettable. Thank you to all my international friends; I enjoyed the happiness of sharing different cultures and food with you. Further thanks to my Chinese group friends and the Chinese community at ITC; we have had a rich and colourful life together with a lot of fun and happiness. I know it is hard to name all the people here. I would like to say thank you to all my friends and wish you a happy and successful life in the future. Last but not least, I wish to express my endless thanks to my dear wife Peihua and my 4-year-old daughter Xiaoxiao; thank you for always standing behind me. The love and encouragement I received from you helped me overcome all the difficulties. I dedicate this thesis to you.
Wang Langyue
Enschede, February 2007

Table of contents
1. Introduction
   Motivation
   Research problem
   Research objective
      The aim of this research
      Main objective
      Sub objectives
   Research questions
   Methodology
   Outline of thesis
2. Analysis of related techniques
   Building models and modelling techniques
      Building models
      Building modelling techniques
      Comparisons
   3D building extraction using aerial imagery and ALS data
      Using aerial imagery
      Using ALS data
      Aerial image and ALS data based approaches
      Comparison and discussion
   Integrating ALS data and aerial imagery
      Schenk and Csatho 2002
      Huber et al. 2003
      Jinhui Hu 2004
      Hongjian and Shiqiang 2006
   Discussion and analysis
      Problems of existing 3D building models
      Limitations of using ALS data and aerial imagery
   Concluding remarks
3. Development of object-oriented building model
   Model and object-oriented modelling
   An object-oriented building model
      Design aspects
      Basic assumptions
      Conceptual model
   Description and definition of building parts
   Building relations and decomposition
      Decomposability and flexibility...23

      Primitive buildings
      Relations between buildings
      Decomposition method and rules
      Internal relations and attributes
   Building combination
      Three building operations
      Combination method and rules
   Data structure and storage
      Data organization for building extraction
      Data storage
   Summary
4. Method design
   Design aspects
      Design goal
      Input data
   Roof recognition and construction
   Object-oriented model based approach
   Workflow
      Data preparation
      3D building data extraction
Implementation platform
Implementation of 3D building extraction
   Define working area
   Identify object building
   Automatic edge extraction and simplification
      Edge extraction
      Edge linking and simplification
   Interactive roof recognition
      Interactive 2D roof edge collection
      Select ALS points belonging to building roofs
      Obtain the building base elevation
      Obtain 3D roof faces and face planes
      Determine the relationship of 2D edges and 3D roof faces
   Semi-automatic roof recognition
   Automatic roof recognition
   Automatic roof construction
   3D building construction and Visualization
      3D building construction
      Building model visualization
   Building model refinement
      Building gluing
      Building merge

      Refinement for intersecting buildings
   Building data storage and export
      Data storage
      Export VRML
   A prototype production system
6. Analysis, discussion and conclusion
   Analysis of extracted 3D building models
      Model fitting evaluation
      Geometric accuracy evaluation
   Discussion on the object-oriented model based method
      Advantages of the object-oriented building model
      Practicality of the presented method
   Future improvements
      Model improvement
      Method improvements
   Conclusion
References...79
Appendices...83
Appendix 1: Test data description...83
Appendix 2: Calculation of projection matrix and Matlab codes...84
Appendix 3: A method to acquire camera projection matrix based on ImageModeler...86
Appendix 4: Data conversion: A MATLAB program for converting LAS file to ASCII file...88

List of figures
Figure 1.1: Research methodology... 5
Figure 2.1: An example of parametric model... 8
Figure 2.2: An example of prismatic model... 8
Figure 2.3: An example of B-rep... 8
Figure 2.4: An example of CSG tree... 8
Figure 2.5: Normalized DSM
Figure 2.6: 3D view of extracted buildings draped on DEM
Figure 2.7: Flow chart of proposed multisensor fusion framework from (Schenk and Csatho, 2002). 13
Figure 2.8: A polyhedral geometric building model from (Huber et al., 2003)
Figure 2.9: Building detection (Huber et al., 2003)
Figure 2.10: From raw LIDAR DSM to the final Building Surfaces (Huber et al., 2003)
Figure 2.11: Algorithmic structure and workflow of primitive-based modelling system from (Jinhui Hu, 2004)
Figure 2.12: Bi-direction projection histogram from (Hongjian and Shiqiang, 2006)
Figure 3.1: Function of models in geo-science domain
Figure 3.2: The conceptual object-oriented building model
Figure 3.3: Examples of single building and composite building
Figure 3.4: Examples of building parts
Figure 3.5: Building relationships
Figure 3.6: The difference between on and beneath and touch
Figure 3.7: Building decomposition example
Figure 3.8: An example building decomposition structure
Figure 3.9: UML class diagram of relations and attributes of object-oriented building model
Figure 3.10: Merge operation
Figure 3.11: Gluing operation
Figure 3.12: Clip operation
Figure 3.13: An example of combining building operations
Figure 3.14: An example structure of building combination
Figure 3.15: An example data organization for building extraction
Figure 3.16: General data structure for building extraction
Figure 3.17: Generic storage of 3D buildings
Figure 3.18: Primitive buildings storage
Figure 3.19: Data collection and storage rule
Figure 4.1: 3D roof construction by integrating 2D edges and 3D roof faces
Figure 4.2: Object-oriented model based approach for 3D building extraction using ALS points and aerial imagery
Figure 4.3: Workflow for object-oriented model based 3D building extraction using ALS points and aerial imagery
Figure 5.1: Illustration of the definition of a working area
Figure 5.2: Examples of cut-off building image
Figure 5.3: Edge extraction results using Canny
Figure 5.4: Edges after edge linking and simplification

Figure 5.5: Examples of interactive 2D roof collection...50
Figure 5.6: Select ALS points within roofs...51
Figure 5.7: Determine building base elevation...51
Figure 5.8: Results of plane growing segmentation with different parameters...52
Figure 5.9: Estimation and calculation of roof face plane...53
Figure 5.10: Roof face convex hulls...54
Figure 5.11: Estimation of relationship of roof edges and convex hull edges...54
Figure 5.12: Semi-automatic roof recognition of a hip roof...55
Figure 5.13: A planar roof building used for automatic 2D roof extraction...57
Figure 5.14: 2D convex hull and simplification...57
Figure 5.15: Convex hull buffer...58
Figure 5.16: Edges within buffer...58
Figure 5.17: Automatically extracted 2D roof and adjustment...59
Figure 5.18: 3D single-face roof construction...59
Figure 5.19: 3D gable roof construction with...59
Figure 5.20: The result of constructed 3D roof of the gable roof building...60
Figure 5.21: Hip roof construction...61
Figure 5.22: 3D building construction examples...62
Figure 5.23: Adjusted model of Figure 5.22(b)...62
Figure 5.24: Hip roof building construction...62
Figure 5.25: Visualization of planar roof buildings...63
Figure 5.26: Visualization of gable roof building...63
Figure 5.27: Visualization of the automatically and semi-automatically extracted building...63
Figure 5.28: Building refinement examples...65
Figure 5.29: Three cases of using merge...65
Figure 5.30: Building clip process...67
Figure 5.31: Data storage structure of 3D building...68
Figure 5.32: 3D buildings in VRML format...69
Figure 5.33: A prototype production system example...70
Figure 5.34: 3D building models of a small test area extracted using the prototype system...70
Figure 6.1: Extracted 3D building model embedded in original ALS points...71
Figure 6.2: Top view of the three extracted buildings...72
Figure 6.3: An example of object-oriented building model with methods...77

List of tables
Table 2-1: Comparisons between building models and modelling techniques... 9
Table 3-1: A selection of primitive buildings
Table 6-1: Ridge elevation differences of the three building models
Table 6-2: Eave elevation differences of the three building models
Table 6-3: Eave edge length differences of the three building models
Table 6-4: Area comparison of the faces of the hip roof
Table 6-5: Comparisons between object-oriented model and the existing building models

1. Introduction

1.1. Motivation
As a major component of future 3D GIS, 3D building models are very important for 3D descriptions of urban areas and for various applications. During the past decade, 3D building model extraction has been an important research topic for many scientists and researchers. Nowadays, with the development of applications such as Location Based Services (LBS), virtual or augmented reality and personal navigation techniques, it can be seen that the Virtual City and even the Virtual Earth will become the next generation platform for various mapping and location services. As a result, the need for up-to-date 3D building data will continue to increase dramatically. Therefore, developing new and practical methods for 3D building data extraction based on up-to-date techniques is of great value. As a future trend, the fusion of different data sources for 3D information extraction is becoming a promising way (Vosselman, 2002). In particular, the integration of Airborne Laser Scanning (ALS) data and aerial imagery has become a hot research topic and a very attractive alternative for the extraction of 3D building data. Aerial imagery is traditionally an important data source for the acquisition of 3D topographic data, but the process is time-consuming and costly, and automatic 3D building information extraction still remains an unsolved problem (Zhou et al., 2004). Using ALS data, 3D building extraction can be achieved automatically, but due to the finite point spacing and the discrete, irregular distribution of the scanned points, ALS data cannot capture the outlines of buildings directly and accurately. Aerial imagery is more accurate for determining building outlines and lengths, whereas ALS data is at its best in deriving building heights and extracting planar roof faces and roof ridges; the two data sources are therefore complementary in terms of the information that can be extracted from them. The integration of the two datasets will thus make an accurate and reliable extraction of 3D building models with an improved degree of automation possible. On the other hand, each data source also has its own drawbacks that affect the data extraction, for example occlusions, low contrast and unfavourable perspective in images. In order to compensate for these effects and to be able to handle the overwhelming complexity of building types and structures, a promising building extraction method must incorporate a sufficiently complete model of buildings and their relations (Braun et al., 1995). Nowadays, it is a trend to employ the object-oriented method to model spatial objects. A 3D building model defined by the object-oriented method, with both geometric and semantic knowledge, will be of great value both for 3D building data application and for 3D building extraction. Therefore, developing an object-oriented building model and combining it with the 3D building extraction process using ALS data and aerial imagery is a most promising way forward. Furthermore, from a methodological and practical point of view, the integration would resolve many practical problems in the development of a practical production system for 3D building data

extraction using ALS data and aerial imagery, resulting in highly practical and versatile systems and extended application potential.

1.2. Research problem
Recently, with the advent and popularity of the Geodatabase, the object-oriented modelling method has been used increasingly for GIS data modelling, offering a greater capability for modelling real-world objects. But until now almost all research has concentrated on how to use the object-oriented method to represent spatial objects and how to store spatial data inside relational database management systems. As a result, almost no research has been carried out on developing object-oriented models for 3D building data extraction. Therefore, the question of whether an object-oriented building model can assist and facilitate 3D building data extraction, and how to use an object-oriented model for 3D building extraction, remains unanswered. To answer this question, first an object-oriented 3D building model for 3D building data extraction needs to be developed, containing a relatively complete concept and definition of how to construct the building model, represent the correct building relations and manage building data. Furthermore, to develop a promising method, the object-oriented building model needs to be integrated into the whole process of 3D building data collection, representation and data management. 3D building data extraction using ALS data and aerial imagery is basically an object recognition problem (Brenner, 2005), namely the building recognition problem. So far, much research has been conducted on automatic approaches, but all of them are limited to rather ideal contexts. How to deal with complicated situations and extract the correct roof structure is the bottleneck of these approaches. The resulting problem is that there is almost no practical 3D building data production system using ALS data and aerial imagery available. The solution lies in improving and facilitating the building recognition by integrating ALS points and aerial imagery efficiently. Therefore, whether the integration of an object-oriented building model with ALS points and aerial imagery can improve the building recognition and result in a practical method for 3D building extraction is another problem that needs to be solved. This research will focus on the design of the object-oriented building model and attempt to combine it with the 3D building data extraction to develop a practical model based method.

1.3. Research objective

1.3.1. The aim of this research
The main aim of this research is to make a contribution to the methodology of extraction of 3D building data.

1.3.2. Main objective
To develop an object-oriented model based method for 3D building extraction by integrating ALS points and aerial imagery.

1.3.3. Sub objectives
1. To develop an object-oriented building model for 3D building data extraction
2. To integrate the object-oriented building model into the 3D building extraction procedure
3. To integrate ALS points and aerial imagery for roof recognition and construction
4. To design a practical workflow for object-oriented model based 3D building extraction
5. To implement the workflow
6. To analyze the extracted building models and evaluate the method

1.4. Research questions
In order to achieve the above mentioned research objectives, the questions below need to be answered:
1. To develop an object-oriented 3D building model for 3D building extraction
How to define the structure of the building model?
How to represent building relations and attributes?
How to construct building models and how to decompose and combine buildings?
How to organize data during data extraction and store 3D building models?
2. To integrate the object-oriented 3D building model into the 3D building data extraction procedure
How to combine the object-oriented model with the 3D building extraction process?
3. To integrate ALS points and aerial imagery for roof recognition and construction
How to integrate 2D edges derived from aerial imagery and 3D planar faces generated from ALS points for roof recognition and construction?
4. To design a practical workflow for object-oriented model based 3D building extraction
What is the input data of the workflow?
What is the structure of a practical workflow?
5. To implement the workflow
How to deal with the large amount of input data?
How to implement 3D building extraction at different automation levels?
How to refine building models?
How to store, visualize and export 3D building data?
6. To analyze the extracted building models and evaluate the method
How to analyze and validate the extracted 3D building models?
How to evaluate the practicability of the workflow?
What are the limitations and achievements?

1.5. Methodology
The research methodology is illustrated in Figure 1.1. In the first step, the research problem will be defined. Then, according to the defined problem, a literature review of 3D building modelling techniques and the state-of-the-art data extraction approaches will be carried out. Furthermore, in order to gain more knowledge and experience for the building model and workflow design, comparisons and analysis of different techniques and approaches will be performed. In parallel, as preparation for the development and implementation of the method to be presented, suitable test data will be collected and processed. The main data processing work is the calculation of the camera projection matrix and data format conversion; if no camera parameters are provided, an approach needs to be developed to derive them (a minimal illustrative sketch of such a computation is given at the end of this chapter; the procedure actually used is described in Appendices 2 and 3).
In the second step, based on the knowledge and experience gained by analyzing related techniques, a suitable structure and the contents of the model will be chosen. Then, using the UML-based object-oriented modelling method, an object-oriented 3D building model with the definitions of building relations, building model construction methods and data management for 3D building extraction will be devised.
In the third step, the object-oriented 3D building model will be integrated into the 3D building data extraction process according to the characteristics of using ALS points and aerial imagery. An object-oriented model based approach will be designed and, based on this approach, a practical workflow will be developed.
In the fourth step, software development techniques, automatic edge detection methods, related ALS data processing techniques and geometric computation methods will be employed to implement the object-oriented 3D building extraction workflow. Based on these methods, implementation algorithms for every required aspect will be developed. Demonstrations of the 3D data extraction of real buildings with different roof types, selected from the prepared test data, will also be given to make the processes clearer. Furthermore, based on the workflow, a prototype production system will be developed by grouping and organizing the programs written during the implementation, which will provide an example for the development of a practical application system based on the proposed object-oriented 3D building extraction method.
In the end, analysis of the extracted 3D building models and evaluation of the proposed method will be performed to give an impression of how reliable and practical the proposed method is.

1.6. Outline of thesis
According to the research methodology, this thesis is organized into 6 chapters as below:
Chapter 1 briefly discusses the motivation for the work, defines the problems, states the research objectives, questions and methodology, and gives an indication of how the research will progress.
Chapter 2 analyzes the related techniques and summarizes the knowledge and practical experiences gained to provide theoretical and methodological preparation for the method development.

[Figure 1.1: Research methodology — flowchart of the five steps and corresponding chapters: Step 1: problem definition (Chapter 1) and analysis of related techniques, covering 3D building modelling and 3D building extraction (Chapter 2), supported by data preparation (test data collection, data conversion, computation of the camera projection matrix; see the appendices); Step 2: choice of a suitable structure and content and object-oriented 3D building model development (Chapter 3); Step 3: combining the building model with data extraction and workflow design (Chapter 4); Step 4: implementation and a prototype production system (Chapter 5); Step 5: analysis and evaluation (Chapter 6).]
Figure 1.1: Research methodology

Chapter 3 is the basic theory part of this thesis and provides the basis for the whole method development and implementation. In this chapter, based on the experiences and lessons gained in Chapter 2, an object-oriented building model for 3D building extraction is developed, covering the aspects necessary for 3D building extraction and for data management towards a Geodatabase and future object-oriented database management systems.
Chapter 4 is the method design part. Based on the object-oriented building model developed in the previous chapter, it deals with the main aspects of the method to be presented, including the input data, how to integrate ALS points and aerial imagery, how to improve the roof recognition and construction, and how to combine the object-oriented model with the data extraction, and finally works out a workflow for object-oriented model based 3D building extraction.
Chapter 5 is the implementation part. It realizes the 3D building extraction based on the workflow by developing a set of algorithms and programs. Four buildings with different roof types are selected as test data, based on which the whole workflow is demonstrated. In the end, a prototype production system is introduced.
Chapter 6 analyzes the extracted 3D building model results and discusses the practical aspects, existing problems and future improvements of the proposed method. In the end, it summarizes the main innovative parts and achievements of this research and concludes this thesis. The relation between the thesis structure and the research methodology can also be found in Figure 1.1.
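The data preparation mentioned in the methodology includes computing a camera projection matrix when no camera parameters are supplied (the procedures actually used in this thesis are given in Appendices 2 and 3). Purely as a minimal sketch — using the standard Direct Linear Transform rather than the appendix procedures, with made-up variable names and no coordinate normalisation — a 3x4 projection matrix can be estimated in MATLAB from at least six 3D-2D point correspondences as follows:

function P = estimate_projection_matrix(X, x)
% Direct Linear Transform (DLT) estimate of a 3x4 camera projection matrix P.
% X: n-by-3 object-space coordinates, x: n-by-2 image coordinates, n >= 6.
n = size(X, 1);
A = zeros(2*n, 12);
for i = 1:n
    Xh = [X(i,:) 1];                              % homogeneous object point
    A(2*i-1, :) = [Xh, zeros(1,4), -x(i,1)*Xh];   % equation for the u coordinate
    A(2*i,   :) = [zeros(1,4), Xh, -x(i,2)*Xh];   % equation for the v coordinate
end
[~, ~, V] = svd(A);              % least-squares solution of A*p = 0 is the
p = V(:, end);                   % right singular vector of the smallest
P = reshape(p, 4, 3)';           % singular value, rearranged into 3x4
end

A point can then be projected with xh = P * [X 1]'; dividing the first two rows by the third gives the pixel coordinates, which is a quick way to check the estimated matrix against the control points.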

2. Analysis of related techniques
Before developing the object-oriented model based 3D building extraction method, this chapter first studies the existing, widely used 3D building modelling techniques and the state-of-the-art 3D building extraction methods. In the meantime, comparisons and analysis are performed to obtain a good understanding of the theoretical and methodological background. The knowledge gained in this chapter will provide practical experience for the development of the object-oriented 3D building model and the model based building extraction method using ALS points and aerial imagery.

2.1. Building models and modelling techniques

2.1.1. Building models
3D building models are digital three-dimensional computer models that combine data and geometry to describe and represent the buildings in the real world. In order to use the computer to reconstruct and visualize buildings, the computer has to know what a building is and how to represent it. Therefore, before reconstructing 3D buildings, a definition of what a building is has to be given in the form of a building model. A model can either be provided explicitly, by giving a set of building primitives, or implicitly, by declaring rules for extracting features from the original data and for grouping these features in different aggregation stages (Rottensteiner, 2001). Some typical, widely used building models are:
(1) Polyhedral models
The polyhedral model is a very general model. It does not define a building shape explicitly but describes buildings with planar faces, which provides the most flexible approximation for the representation of buildings. Polyhedral models place no constraints on the form and type of the buildings and can thus represent arbitrarily shaped buildings (Fischer et al., 1999). Their only limitation is that all faces must be planar surfaces; a building with a dome-shaped roof, for example, can therefore not be modelled exactly.
(2) Parametric models
Parametric models describe a building by a small set of parameters and are also called parameterized volumetric primitives. The characteristic of this kind of building model is that the topology of a provided building model type is fixed, and instances of any object shape are created by varying only the corresponding parameters. For example, in Figure 2.1, the shape of a building with a flat roof can be described by the vector V = (l, w, h)^T. Obviously, these models are very effective and suitable for simple buildings that can be described using a few parameters.
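To illustrate the parametric idea in code (a sketch only — the function name, the pose parameters and the returned face list are illustrative additions, not part of the model definitions above), the flat-roof primitive of Figure 2.1 can be instantiated from its parameter vector (l, w, h): the topology of its eight corners and six faces is fixed, and only the parameters, plus an optional position and orientation, change per instance.

function [V, F] = flat_roof_primitive(l, w, h, x0, y0, z0, alpha)
% Instantiate a flat-roof box primitive from its parameters (l, w, h),
% translated to (x0, y0, z0) and rotated by alpha (radians) about the z-axis.
% V: 8x3 corner coordinates, F: 6x4 list of face-vertex indices.
corners = [0 0 0; l 0 0; l w 0; 0 w 0; ...    % floor corners
           0 0 h; l 0 h; l w h; 0 w h];       % roof corners
Rz = [cos(alpha) -sin(alpha) 0; sin(alpha) cos(alpha) 0; 0 0 1];
V  = corners * Rz' + repmat([x0 y0 z0], 8, 1);   % rotate, then translate
F  = [1 2 3 4;                                % floor
      5 6 7 8;                                % flat roof
      1 2 6 5; 2 3 7 6; 3 4 8 7; 4 1 5 8];    % four vertical walls
end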

(3) Prismatic models
Prismatic models can be regarded as a special case of polyhedral models that describe buildings by arbitrary ground planes and the heights of the buildings. They are able to describe complex buildings with parts of different heights, but they are restricted to flat roofs and vertical walls (see Figure 2.2).
Figure 2.1: An example of parametric model
Figure 2.2: An example of prismatic model
It is worth noting that parametric models and prismatic models are special cases of polyhedral models.

2.1.2. Building modelling techniques
(1) Primitive instancing
Based on parametric models, this technique uses and stores a set of pre-defined possible object shapes described by a set of parameters. By interactive measuring, translation and rotation, these parameters are determined and the building is thereby instantiated.
(2) Sweep methods
This kind of method is based on the fact that some 3D buildings can be created by moving a planar shape along a curve or according to a pre-defined rule. For example, a cylinder can be created by rotating a rectangle around a pre-defined rotation axis. This method is therefore well-suited for representing prismatic and rotationally symmetric solid objects (Fischer et al., 1999).
(3) Boundary representation (B-rep)
This method represents a building by its boundary, consisting of a set of faces, a set of edges and a set of vertices as well as their mutual topological relations (see Figure 2.3). It is very flexible, and models using B-rep are well-suited for visualization tasks because they readily include all aspects needed for representing a building. Therefore, it is one of the most frequently used building modelling techniques.
Figure 2.3: An example of B-rep (Picture from (Rottensteiner, 2001))
Figure 2.4: An example of CSG tree (Picture from
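As a concrete, purely illustrative B-rep sketch, the small gable-roof building below is stored as an explicit vertex list and a set of faces given as vertex-index loops; the topological relations follow from the shared indices, the edge list is derived from the face loops, and all coordinates are invented for the example. MATLAB's patch function can display such a model directly.

% A gable-roof building as a boundary representation (B-rep).
V = [ 0 0 0; 10 0 0; 10 6 0;  0 6 0; ...   % floor corners
      0 0 4; 10 0 4; 10 6 4;  0 6 4; ...   % eave corners
      0 3 6; 10 3 6];                      % ridge end points
F = {[1 2 3 4], ...                        % floor
     [5 6 10 9], [8 7 10 9], ...           % two sloped roof faces
     [1 2 6 5], [3 4 8 7], ...             % long walls
     [2 3 7 10 6], [1 4 8 9 5]};           % pentagonal gable-end walls
% Edges follow from the face loops: consecutive vertex pairs, each stored once.
E = [];
for k = 1:numel(F)
    f = F{k}(:);
    E = [E; f, circshift(f, -1)];          %#ok<AGROW>
end
E = unique(sort(E, 2), 'rows');            % unique vertex-index pairs = edges
% Visualize the boundary representation.
figure; hold on; axis equal; view(3);
for k = 1:numel(F)
    patch('Vertices', V, 'Faces', F{k}, 'FaceColor', [0.85 0.85 0.95]);
end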

(4) Constructive solid geometry (CSG)
In CSG, a set of simple solid objects called primitives, typically including cuboids, cylinders, prisms, pyramids, spheres and cones, is defined. A complex building can then be constructed from these primitives by means of geometric transformations and the Boolean set operations union, intersection and difference (see Figure 2.4). Usually, the primitives are parametric models. In practice, CSG and boundary representation (B-rep) are the main and most commonly used modelling techniques in building reconstruction. Primitive instancing and sweep methods are usually employed as auxiliary techniques for some simple or special cases.

2.1.3. Comparisons
According to (Rottensteiner, 2001) and (Fischer et al., 1999), a simple comparison can be made between the above-mentioned building models and modelling techniques, as shown in Table 2-1.
Table 2-1: Comparisons between building models and modelling techniques
Polyhedral models - Advantages: can represent buildings in a generic way. Disadvantages: missing semantic classification of the building and building parts; occluded parts cannot be modelled; restricted to planar surfaces.
Parametric models - Advantages: represent semantically more meaningful models; buildings can be explicitly modelled; partially occluded buildings can also be modelled. Disadvantages: restricted to a small number of predefined building types.
Prismatic models - Advantages: can describe arbitrarily complex polygonal building ground plans. Disadvantages: only applicable for buildings with flat roofs and vertical walls.
B-rep - Advantages: well-suited for visualization; represents good topological relations; applicable for many building types. Disadvantages: no semantic information; Boolean operations are very difficult; lower automation degree.
CSG - Advantages: very suitable for buildings that are relatively simple and show symmetries. Disadvantages: cannot be easily visualized.
From this comparison and the above overview of existing models and modelling techniques, it can be seen that so far there is no common language or model for a building (Gülch et al., 2004). The building models and modelling techniques should be related to the data sources and the data extraction methods; every model has its own most suitable context. In order to overcome the drawbacks of the different building models and modelling techniques, it seems that hybrid modelling schemes will provide more desirable results (Rottensteiner, 2001). For example, in (Englert, 1998) a CSG based hybrid modeller is used in the system HASE+ for semi-automatic building reconstruction. In (Rottensteiner, 2001), a B-rep based hybrid modeller with facilities for local modifications is used for semi-automatic building reconstruction.
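A CSG description can be thought of as a small tree whose leaves are parametric primitives and whose internal nodes are set operations. The sketch below only encodes such a tree in MATLAB structs (a box-shaped main body united with a saddle-roof wing, in the spirit of Figure 2.4) and walks it to list the leaf primitives; evaluating the Boolean operations into an explicit boundary surface would require a CSG kernel and is not shown. All names and parameter values here are invented for illustration.

% Leaf nodes: parametric primitives (type, parameter set, pose).
box  = struct('kind', 'primitive', 'type', 'box', ...
              'params', struct('l', 12, 'w', 8, 'h', 6), ...
              'pose',   struct('x', 0, 'y', 0, 'z', 0, 'alpha', 0));
wing = struct('kind', 'primitive', 'type', 'saddleroof', ...
              'params', struct('l', 6, 'w', 5, 'eave_h', 4, 'ridge_h', 6), ...
              'pose',   struct('x', 12, 'y', 1.5, 'z', 0, 'alpha', 0));
% Internal node: a Boolean set operation applied to the two sub-trees.
building = struct('kind', 'operation', 'op', 'union', ...
                  'children', {{box, wing}});
prims = collect_primitives(building);   % -> {box, wing}

function leaves = collect_primitives(node)
% Recursively gather all leaf primitives of a CSG tree.
if strcmp(node.kind, 'primitive')
    leaves = {node};
else
    leaves = {};
    for k = 1:numel(node.children)
        leaves = [leaves, collect_primitives(node.children{k})]; %#ok<AGROW>
    end
end
end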

2.2. 3D building extraction using aerial imagery and ALS data
With the development of data acquisition techniques, 3D building models can now be built systematically based on a wide range of techniques and data sources, for example digital photography, laser scanning and cadastral information bases. Many approaches using different data sources, with varying resolution, accuracy, turnaround time and cost, have been developed to reconstruct 3D building models semi-automatically or automatically. Aerial imagery and ALS data are still the main data sources for 3D building extraction.

2.2.1. Using aerial imagery
For aerial imagery, stereo photogrammetry is the primary approach for 3D mapping and building extraction. The process involves the manual digitizing of the minimum number of points necessary for automatically reconstructing the buildings. The problem of this kind of approach is mainly related to appropriate methods of collecting information about the third dimension of buildings. The roof is the main feature that needs to be collected when using aerial imagery, and roof complexity is the leading consideration. In order to deal with complicated building roofs, the roof shapes are normally distinguished in three main categories: single-faced roofs (flat roof, pent roof, etc.), multi-faced roofs (e.g. saddle roof) and multi-level roofs (Zlatanova et al., 1998). Special regulations are then specified on how the building has to be captured in accordance with the different types and categories. In the extraction phase, the corners of the roof facets are digitized manually in a photogrammetric stereo model, creating a skeletal point cloud. The reconstruction consists of automatically computing and assembling all the facets of the building from this point cloud. Although advances have been reported in automatic building extraction, human intervention at some stage of the process is still indispensable (Zlatanova et al., 1998). In (Rottensteiner, 2001), a method for semi-automatic building extraction, together with a concept for storing building models alongside terrain and other topographic data in a topographical information system, is presented. The approach is based on the integration of building parameter estimation into the photogrammetric process, applying a hybrid modelling scheme. In this scheme, a building can be decomposed into a set of simple primitives that are reconstructed individually and are then combined by Boolean operators, just as illustrated in Figure 2.4. The primitives are stored in a database of common building shapes. The data structure of both the primitives and the compound building models is based on the boundary representation method.

2.2.2. Using ALS data
In addition to photogrammetric techniques relying on aerial images, the generation of 3D building models from laser scanning point clouds is becoming an attractive alternative. This development has been triggered by sensor technology allowing dense point clouds. (Haithcoat et al., 2001) presented an automatic approach for building extraction and reconstruction from airborne LIDAR data. First a digital surface model (DSM) is generated from the ALS data, and then the objects higher than the ground are automatically detected from the DSM. Based on general knowledge about buildings, geometric characteristics such as size, height and shape information are used to separate buildings from other objects. The extracted building outlines are simplified using an

orthogonal algorithm to obtain better cartographic quality. Watershed analysis is conducted to extract the ridgelines of building roofs. The ridgelines as well as slope information are used to classify building types. The buildings are reconstructed using three parametric building models (flat, gabled, hipped) (see Figure 2.5).
Figure 2.5: Normalized DSM
Figure 2.6: 3D view of extracted buildings draped on DEM
A comparison between the automatically extracted buildings and reference data manually digitized from an aerial photograph with 0.25 m resolution was also made. The residential area has 79 buildings, of which 93.7% were extracted. The roof type classification is quite good, with 90.9% correctness. All 12 buildings in the downtown scene were extracted, but two large vegetation areas were extracted as well, and one building was misclassified. The experimental results are promising. Because of the high point densities provided by modern laser scanners, most roof faces can often be detected reliably without making use of shape primitives. But automatic procedures may fail in recovering the correct information due to the complexity of the scene; therefore, interactive tools for editing the results are necessary. There are also many papers concerning the reconstruction of buildings from ALS data without using additional information sources, for example (Vosselman, 1999), (Maas and Vosselman, 1999), (Rottensteiner and Briese, 2002), (Elaksher and Bethel, 2002) and (Madhavan et al., 2006).

2.2.3. Aerial image and ALS data based approaches
In order to make 3D building extraction more accurate and automatic and to reduce the complexity of the reconstruction problem, much research has also been done to take advantage of existing map or GIS data.
(1) Combining aerial imagery and existing 2D data
In (Suveg and Vosselman, 2004), the footprints of the buildings in the existing map were used in almost all steps of the approach. Firstly, a building can be localized in an image and its region of interest delineated; the rest of the processing is then done only on this region. Secondly, the footprint can give a good hint about the structure of the building and can therefore be used to derive hypotheses on the decomposition of the building into simple building primitives. Furthermore, the footprint can provide the initial values for the generation of building hypotheses. In the final verification step, the CSG tree describing a building is fitted to the image data. According to an experiment, the approach was able to reconstruct more than 75% of the buildings, and the accuracy of the reconstruction is good enough for mapping purposes.
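A recurring building block in these footprint-driven approaches (and in the workflow developed later in this thesis, e.g. Figure 5.6) is selecting the ALS points that fall inside a known 2D outline and deriving height information from them. The following hedged sketch shows the idea in MATLAB; the variable names and the simple percentile-based height estimates are illustrative assumptions rather than the exact procedure used by any of the cited methods.

% pts: n-by-3 ALS points [X Y Z]; fx, fy: footprint polygon vertex coordinates.
in       = inpolygon(pts(:,1), pts(:,2), fx, fy);   % 2D point-in-polygon test
roof_pts = pts(in, :);                              % points on/inside the building
ground_z = prctile(pts(~in, 3), 5);                 % crude local ground elevation
eave_z   = prctile(roof_pts(:,3), 25);              % robust lower roof level
ridge_z  = max(roof_pts(:,3));                      % highest roof point
fprintf('Approximate eave height %.2f m, ridge height %.2f m above ground\n', ...
        eave_z - ground_z, ridge_z - ground_z);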

(2) Combining ALS data and existing 2D data
In (Brenner, 2000), (Vosselman and Dijkman, 2001) and (Schwalbe et al., 2005), methods combining ALS data with existing ground plans of buildings, 2D cadastral maps or topographic data were proposed in order to enable automatic data capture by the integration of these different types of information. The use of existing 2D information yields better success rates especially in areas with small buildings and flat roofs, because of the uncertainty in determining the buildings' orientation from ALS data. On the other hand, in areas with large buildings, where the roof faces are rather steep and represented by numerous points, the orientation can be derived more accurately from the ALS data than from the ground plans; in such cases the use of ground plan information can even decrease the success rate.

2.2.4. Comparison and discussion
Using aerial imagery and ALS data
(1) Accuracy
According to the results of the EuroSDR building extraction test (Kaartinen et al., 2005), photogrammetric methods were in general more accurate in determining building outlines and lengths. Laser scanning is at its best in deriving building heights and extracting planar roof faces and roof ridges. In building outline determination, point density, shadowing by trees and the complexity of the structure are the major reasons for site-wise variations of the ALS data based approaches. In building length determination with laser scanning, the complexity of the buildings, rather than the point density, was the major cause of site-wise variation.
(2) Degree of automation
Laser data allows higher automation, but for complex building models extra editing is needed, which may slow down the process. Even though some ALS data based processes are relatively automatic, these processes are still under development. Due to the complexity of full automation with photogrammetry, the majority of development work has focused on semi-automatic systems, in which recognition and interpretation tasks are performed by the human operator, whereas modelling and precise measurement are supported by automation.
(3) Applicability
Using aerial imagery is proven to be stable, accurate and operational. In general, the methods using aerial images and interactive processes are capable of producing more detailed building models. Using ALS data, although the degree of automation is high, the accuracy of the final building model is lower; furthermore, it is only applicable to relatively simple buildings. For buildings with complex structures, it seems very hard to achieve a good 3D building extraction using ALS point data alone.

Using existing 2D data
Using existing 2D data has some problems. Sometimes existing geospatial data is less accurate and complete, and topological errors may exist. Existing data is not generally available. Sometimes existing data refer to a different coordinate system and are not up-to-date (Baltsavias, 2004). Moreover, most of the time the building footprints in existing data are not at the eaves level, and roof details are not shown in the ground plan. Therefore, the fusion of maps with images or laser scanning data does reduce the complexity of the problem but does not enable a complete solution.

2.3. Integrating ALS data and aerial imagery
It seems that using either ALS data or aerial imagery alone cannot achieve satisfactory results. Therefore, fusing ALS data and aerial imagery for 3D building extraction has become the most promising future trend. As a hot research topic, several related research works have been carried out to integrate ALS data and aerial imagery for the extraction of 3D building models.

2.3.1. Schenk and Csatho 2002
In (Schenk and Csatho, 2002), a workflow of multisensor fusion was proposed (see Figure 2.7). It mainly described two aspects of merging aerial imagery and ALS data to reconstruct surfaces, namely the establishment of a common reference frame and the fusion of geometric and semantic information for an explicit surface description.
Figure 2.7: Flow chart of proposed multisensor fusion framework from (Schenk and Csatho, 2002)
The processes on the left side are devoted to referencing the aerial images (A) to the ALS data (L) using invariant features. Besides direct orientation, several other methods are proposed: orientation based on 2D image edges and 3D ALS edges, orientation based on 3D model edges and 3D ALS edges, orientation with surface patches, and an alternative solution with range images. Based on the aligned aerial images (A') and ALS data (L'), the processes on the right side are aimed at the reconstruction of the 3D surfaces by feature-based fusion, mainly as below:

- Orient the stereopair (two images) to the laser point clouds.
- Find planar surface patches in the ALS data and their boundaries.
- Perform automatic edge detection on both images. The edges obtained in both images are then projected back into the object scene.
- Combine the boundaries derived from the ALS point clouds, from the aerial images, or from a combination of both, to form a description of the surface.
The paper does not, however, give a complete example of a 3D building model.

2.3.2. Huber et al. 2003
In (Huber et al., 2003), a model based approach was proposed which is performed within the scope of a general surface estimation process. First of all, a polyhedral geometric building model was defined (see Figure 2.8). A building comprises a building roof and walls. The roof boundary (dashed black line) separates roof from wall. The wall plane (dashed gray line) has a certain slope. A roof plane boundary (solid gray line) encloses a roof plane surface having a minimum size and a minimum angle with the wall plane. A common roof plane outline (dotted line) connects two roof planes.
Figure 2.8: A polyhedral geometric building model from (Huber et al., 2003)
The automatic reconstruction comprises two iterations. In the first step (see Figure 2.9), the tree areas were detected from the ALS data according to the height difference between the first and last pulses. The roof areas were extracted using region growing and geometric reasoning. By interpolation, a range image based on the raw ALS data was created, and edges were then calculated from the gradient magnitude image of the range image.
Figure 2.9: Building detection (Huber et al., 2003). a) Test building. b) Result of blob extraction. c) Result of edge extraction. d) Range data represented as grey value image. e) Gradient magnitude image
Meanwhile, image edges and homogeneous regions were extracted from the image. Then all the data were fused to get hypotheses for the position and shape of the buildings. Image edges were projected

into the ground system by intersection with the raw ALS point triangulation and then compared with the ALS edges. Finally, a preliminary surface was obtained (see Figure 2.10). In the second step, the exact reconstruction of the buildings was performed. The 2D image edges were projected into the corresponding infinite roof plane. Then an edge clustering process based on two histograms, one for the orientations and one for the positions of all projected edges, was applied. The result was a few infinite lines whose intersections define the boundary of one roof plane. In the end, the final surface estimation, based on the aggregated information of the tree area data derived from the ALS data, the image segmentation data and the building polygons, was performed to obtain the final surface (see Figure 2.10).
Figure 2.10: From raw LIDAR DSM to the final Building Surfaces (Huber et al., 2003). (a) Raw LIDAR DSM. (b) Preliminary estimated surface. (c) Final surface
The method was tested and showed good results with different building types, including flat roofs, gable roofs and combined structures, but tests with complex roof structures were not yet performed.

2.3.3. Jinhui Hu 2004
(Jinhui Hu, 2004) presented a primitive-based modelling system to create a hierarchical building model composed of geometric primitives using airborne LIDAR and aerial imagery (see Figure 2.11).
Figure 2.11: Algorithmic structure and workflow of primitive-based modelling system from (Jinhui Hu, 2004)
The approach is a hierarchical technique that allows users to create a hierarchical building model composed of geometric primitives. Linear primitives and higher-order surface primitives are used for model fitting and refinement. To improve accuracy and efficiency, it employed image information to

aid the model fitting and refinement processes. Both knowledge-level and pixel-level information were used. The texture and colour information from the aerial image was used to automate the segmentation process. Building shape cues from the range image were used to reduce the number of model hypotheses and the computational complexity. Edges from high resolution aerial images were used to improve the model accuracy.

2.3.4. Hongjian and Shiqiang 2006
In (Hongjian and Shiqiang, 2006), a 3D building reconstruction approach based on aerial CCD imagery and sparse ALS data was presented. One advantage of CCD images is that the geometric shape of a building can be detected more reliably. First, an edge detection algorithm combining Laplacian edge sharpening with threshold segmentation was developed and employed to detect the edges and lines in the images. Then a method using a bi-direction projection histogram was used to determine the corner points of buildings and to extract the contour of the building by gradual searching and matching (see Figure 2.12). The four corners of the building can be extracted by combining the two directions according to the direction histogram. The heights of the building were calculated from the laser points within the building boundary.
Figure 2.12: Bi-direction projection histogram from (Hongjian and Shiqiang, 2006)
Because of the limitations of the bi-direction histogram and of the method used to obtain the heights of the roofs, the proposed method seems suitable only for buildings with rectangular shapes and flat roofs; it is very hard to apply it to complex building reconstruction.

2.4. Discussion and analysis

2.4.1. Problems of existing 3D building models
It can be seen that all the 3D building models in use contain no semantic information, or the semantics are only represented in attributes. They only represent the geometry needed for the visualization of 3D buildings. However, in order to develop more applications for 3D building models, all the geometric parts of a building need to be associated with semantic meanings so that new applications can be realized. Furthermore, a semantics-driven visualization can also improve the usability of the building model (Benner, 2005). The existing building models also cannot represent building relationships, which are very important for spatial analysis, operations between buildings and 3D building extraction. Another problem is that the existing building models seem unsuitable for combining ALS points and aerial imagery. The existing building models only give a description of 3D buildings but do not give detailed instructions on how to construct different kinds of buildings. Since for using aerial imagery

and ALS data only the roof can be measured directly, for complicated buildings both the decomposition and the combination of buildings are necessary for solving the problem. A good building model must therefore also contain construction rules; otherwise it cannot deal with complicated scenes.

2.4.2. Limitations of using ALS data and aerial imagery
Besides the promising trend and the many advantages, there are also some limitations of using ALS data and aerial imagery. Realizing these limitations will help to develop a more practical workflow. As a common part of all the methods, digital image processing techniques are used for the automatic extraction of edges/lines in the images. But no matter what kind of algorithm is used, one cannot obtain good results in all cases; for example, some edges may be missed or ill-detected. Furthermore, one will get a large number of edges/lines, and most of the time the context is very complicated, which makes it very hard to extract the correct edges. Another limitation of aerial images is occlusions and geometric constraints. Therefore, in practice, the reconstruction assumes that buildings have only vertical walls, without windows and doors, and non-overhanging roof facets. For automatic ALS data processing, the filtering and segmentation algorithms also cannot guarantee that all points will be determined and classified correctly. Because of the large volume of data, the cost of processing is high and therefore also needs to be taken into account. In addition, all the methods combining ALS data and aerial imagery have problems in dealing with the reconstruction of 3D buildings with complex structures. Because most efforts have been devoted to unrealistic full automation, so far very few research efforts have found their way into a practical or even commercial system (Gülch et al., 2004). In short, it seems that the use of knowledge and semi-automation is the only feasible way to develop a useful and operational 3D building extraction method. But the combination of ALS data and aerial imagery will improve the automation, and automatic processes can also be employed and can produce good results in ideal contexts. Therefore, a method combining both semi-automatic and automatic processing should be feasible and operational and can finally obtain good results.

2.5. Concluding remarks
From the study and analysis above, we can draw some conclusions:
- A good building model for 3D building extraction should contain geometry, semantics and building relationships.
- A building model suitable for integrating aerial imagery and ALS data should take the characteristics of the data sources into account and contain data collection and building construction methods.
- Boundaries inferred from ALS points are fuzzy and uncertain, but good roof faces can be obtained from them, whereas edges detected from images are accurate but it is hard to find the correct ones. Therefore, the integration of aerial imagery and ALS data should aim to improve the building roof recognition and reconstruction.
- Although many efforts have been made to combine the two kinds of information to achieve automated processing, full automation of 3D building extraction is not practical. A method combining several approaches with several levels of automation: automatic, user-assisted

and manual, for better 3D building model production, is highly recommended (Flamanc et al., 2003).
With the current techniques, and motivated by the wide demand for 3D building data, developing a practical method for 3D building extraction is feasible and very promising. Based on the knowledge and experience obtained here, an object-oriented 3D building model for integrating ALS data and aerial imagery for 3D building extraction will be developed in the next chapter.
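To make the complementarity stated in the concluding remarks concrete — accurate 2D edges from imagery combined with roof planes that are reliably derived from ALS points — the sketch below back-projects an image point onto a given roof plane, i.e. it intersects the viewing ray through the pixel with the plane Ax + By + Cz + D = 0. This is the kind of operation used, for example, when image edges are projected into an infinite roof plane as in (Huber et al., 2003); the function name and the assumption of a known 3x4 projection matrix P are illustrative only.

function Xw = image_point_to_roof_plane(P, uv, plane)
% P: 3x4 camera projection matrix, uv: [u v] pixel coordinates,
% plane: [A B C D] roof plane parameters with A*x + B*y + C*z + D = 0.
M  = P(:, 1:3);
p4 = P(:, 4);
C  = -M \ p4;                      % projection centre of the camera
d  =  M \ [uv(:); 1];              % direction of the viewing ray through uv
n  = plane(1:3); n = n(:);         % plane normal as a column vector
lambda = -(n.'*C + plane(4)) / (n.'*d);   % ray-plane intersection parameter
Xw = C + lambda * d;               % 3D point on the roof plane
end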

3. Development of object-oriented building model
This chapter aims to design and develop an object-oriented building model for 3D building data extraction. The object-oriented method has proven to be efficient and applicable in many fields. With the advent of the Geodatabase, there is also a trend to use it for modelling spatial objects. Object-relational DBMSs (Database Management Systems) have been widely used to manage and store diverse collections of geographic information types, and they have been extended to manage 3D data. Although object-oriented DBMSs (OODBMSs) still have some technical problems that need to be solved before practical use, they are without doubt the most promising technical trend in DBMS development. By using the object-oriented modelling method, the limitation of existing building models, which are only geometrical models without semantic and topological aspects, can be overcome. For example, CityGML provides a common information model for the representation of 3D urban objects (CityGML website). Using the object-oriented method, it defines classes and relations for the most relevant topographic objects, including buildings, in city and regional models with respect to their geometrical, topological, semantic and appearance properties, which means it can be used not only for visualization purposes but also for thematic queries, analysis tasks or spatial data mining. The application capability of 3D building models is thereby improved. There is also other research on 3D semantic building models using the object-oriented method; for example, (Benner, 2005) presents an object-oriented 3D semantic building model dedicated to urban development. It can be seen that a 3D building model defined by the object-oriented method, with both geometric and semantic knowledge, is of great value. It is also worth noting, however, that so far all research using the object-oriented method to model buildings has focused only on how to improve the representation and applicability of the 3D building models. No research has been carried out on how to design an object-oriented building model that improves the data extraction process for 3D buildings. Consequently, there is currently no object-oriented building model dedicated to 3D building extraction. It can be expected that an object-oriented building model is also promising for 3D building extraction, because:
- Buildings themselves are objects.
- An object-oriented model is itself a hierarchical description of relations between the objects or subclasses that constitute building objects, and can therefore also be regarded as a decomposition model of the building object or class. This makes it very well suited for 3D building data extraction and provides a good idea for extracting complex buildings.
- The object-oriented method could be the best way to integrate data collection and model construction, geometry and topology, semantics and properties, as well as data storage and management of 3D buildings, into one model. It not only makes the models represent the buildings more realistically but is also in line with the technical development trends.

3.1. Model and object-oriented modelling

A model is a representation of what we consider essential (or important) about an object or situation. In the geo-science domain, models are abstractions of real-world objects. They are used to make an abstraction of reality with the aim of making reality understandable (Stoter, 2004), and to interpret the world and show people what the world is.

Figure 3.1: Function of models in the geo-science domain (real world, abstraction, models, visualization, virtual reality)

As we know, the real world is a three-dimensional space and everything in it is a 3D object. In 3D computer graphics, a 3D model is a mathematical representation of a three-dimensional object. Object-oriented modelling uses objects to represent real-world objects and establishes a direct correspondence between real-world objects and their computer representation. In object-oriented models, an object is the representation of a real-world object having spatial and thematic characteristics, for example topology, location, size and shape, name and other attributes. The main characteristics of object-oriented modelling are that everything in the model is an object and that an object can be made up of other objects. In object-oriented models, classes are abstractions of objects in the real world and an object is an instance of a class. Objects have attributes and relationships exist between objects. Both objects and relationships have constraints, for example topological and semantic constraints, and many operations or methods can be performed on objects.

3.2. An object-oriented building model

3.2.1. Design aspects

For good and practical use of 3D building models, five aspects of the representation need to be taken into consideration: geometry, topology, semantics, data storage and visualization. In an object-oriented method, geometric shapes are represented by objects, which can be composed of or aggregated from subgeometries or subobjects, by value or by reference. Topology can be expressed by the relations and connections between these components or references. The integration and manipulation of the partial models by the object-oriented method should be practical. Semantics are expressed by a coherent aggregation of spatial and semantic components, and each component also carries relevant thematic attributes. For example, a building can be composed of a roof, walls and a floor, and the roof component may have thematic attributes such as roof type, material, etc. The data structure and storage of the building model should be suitable for 3D building data extraction and meet the requirements of maintaining 3D building data in a DBMS. Visualization is implemented according to the data structure.

The structure of the building model should be extendable so that users can extend it for their special cases. A complex building can be decomposed into simple buildings, and the decomposition should be flexible and able to deal with complicated scenes.

3.2.2. Basic assumptions

(1) Buildings are restricted to polyhedral objects formed by a set of bounding faces, which are assumed to be plane surfaces described by the equation:

    Ax + By + Cz + D = 0    (3-1)

where (A, B, C) are the components of the normal vector of the surface. Spherical and cylindrical faces can be approximated by a set of flat faces.
(2) Buildings can be represented by a combination of simple basic primitive buildings; that is, a building can be decomposed into one or more primitive buildings. When using ALS data and aerial imagery, roofs are the primary objects that can be measured or derived directly. Therefore, the primitive buildings are classified by their roof types.
(3) Every building consists of three building parts: roof, walls and floor. The walls of a building are vertical and the ground floor of a building is always level.
(4) No overhanging buildings are allowed.
(5) In reality, buildings may have installations such as dormers, chimneys and balconies, but this research focuses only on the buildings themselves.

3.2.3. Conceptual model

The concept and structure of the 3D building model proposed in this research is shown in Figure 3.2. In general, the 3D building model is divided into two classes: single building and composite building.

Figure 3.2: The conceptual object-oriented building model (classes: 3D Building Model, Single Building, Composite Building, Roof, Wall, Floor, Roof face)
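The planar-face assumption of equation (3-1) can be illustrated with a small least-squares fit. The following MATLAB sketch (not taken from the thesis; the point coordinates and variable names are made up) estimates (A, B, C, D) for a handful of roof-level ALS points and reports the fitting residuals.

    % Minimal sketch: fit the plane Ax + By + Cz + D = 0 of equation (3-1) to a
    % small set of hypothetical roof-level ALS points [X Y Z].
    pts = [10.1 20.3 35.2;
           12.4 20.1 35.9;
           10.3 24.8 35.4;
           12.6 24.5 36.1;
           11.2 22.4 35.7];

    c = mean(pts, 1);                                % centroid of the points
    [~, ~, V] = svd(pts - repmat(c, size(pts, 1), 1), 0);
    n = V(:, 3)';                                    % (A, B, C): unit normal of the best-fit plane
    D = -n * c';                                     % from A*xc + B*yc + C*zc + D = 0

    res = pts * n' + D;                              % signed point-to-plane distances
    fprintf('Plane: %.3f x + %.3f y + %.3f z + %.3f = 0 (RMS %.3f m)\n', ...
            n, D, sqrt(mean(res.^2)));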

A simple and essential dividing line between a single building and a composite building is that a single building has only one roof, whereas a composite building has two or more roofs. A single building is composed of three semantically annotated building parts, namely roof, wall and floor. It is worth noting that a single building has exactly one floor and one roof but at least three walls, and that the roof may be composed of one or more roof faces according to the roof structure. A composite building can either be aggregated from two or more buildings, each connected to the others according to the relationships defined in Section 3.3.3, or be combined from two or more buildings using the building operations defined in Section 3.5. For example, in Figure 3.3, (a) is a single building with a saddle roof, while (b) and (c) are composite buildings: (b) is aggregated from two flat-roof buildings and (c) is combined from two flat-roof buildings. It can also be seen that the ways of aggregation differ, which is described in the sections below.

Figure 3.3: Examples of a single building (a) and composite buildings (b, c)

3.2.4. Description and definition of building parts

The object-oriented building model, whether a single or a composite building, is composed of three building parts: roof, wall and floor, which are all 3D planar polygonal objects (see Figure 3.4). The roof is the top covering of a building within a closed roof boundary and can be composed of several connected planar roof faces. The roof boundary polygon consists of the outermost edges of the roof.

Figure 3.4: Examples of building parts: (a) a single gable roof building; (b) roof boundary of (a) (thick lines); (c) building parts of (a): roof face polygons, wall polygons and floor polygon

The floor is the projection of the roof boundary polygon onto either the ground, if the building is located on the ground, or onto the roof of another building on which the building is located. Each wall is a vertical polygon bounded by the corresponding roof edge(s), the floor edge, and the connecting lines between the endpoints of the roof edge(s) and the corresponding endpoints of the floor edge.

3.3. Building relations and decomposition

Building relations are important for spatial analysis in a GIS context. For 3D building data extraction, they are also very important for building decomposition, which is crucial for the extraction of complex buildings. As can be seen from the structure and definition of the conceptual object-oriented building model, one of the most important characteristics of the proposed building model is decomposability.

3.3.1. Decomposability and flexibility

The main characteristic of the proposed building model is decomposability, which has two levels of meaning:

- Any building can be decomposed into one or more primitive buildings.
- A primitive building itself can be decomposed into three parts.

Another characteristic of the proposed building model is flexibility, which means that the decomposition can be very flexible. In many cases, a building can be treated as either a single building or a composite building, depending on the scene and the difficulty of data collection. In order to achieve better data extraction, a single building can sometimes also be decomposed into two or more buildings; for example, if needed, the single gable roof building in Figure 3.4(a) can be decomposed into two single shed roof buildings.

3.3.2. Primitive buildings

The decomposition of a building is based on the interpretation of the geometry of the building roof. Theoretically, every building can be decomposed into a set of simple flat-roof buildings, but it is useful to select and define more primitive buildings, each with an identified roof type, fixed structure and fixed topological relations. Using primitive buildings has several merits:

- It saves working time and simplifies the decomposition process.
- It improves the degree of automation, so that many buildings can be extracted more automatically.
- It simplifies data storage, and topology can be easily maintained.

In this research, seven primitive buildings are selected and defined, as listed in Table 3-1. The selection of the primitive buildings is based on roof type and knowledge of realistic buildings, and all primitive buildings are single buildings. The most important property of a primitive building is the number of roof faces. The pictures only give some samples of the common characteristics of each kind of primitive building; in reality the shapes of a primitive building may vary from case to case, but the essential definitions, for example the number of roof faces, are the same. A single-face roof building has only one planar roof face, no matter how many edges the roof has, and the roof face does not need to be level. A gable roof has two slopes that come together

to form a ridge at the top, which means it has two roof faces. A hip roof is one whose slopes go upward from all sides of the building, forming four roof faces, with the two longer sides forming a ridge at the top. The semi-hip roof building is a special case of the hip roof building in which one of the short sides has no slope, so it has three roof faces. A hipped-gable roof building can be regarded as a gable roof building with the two ridge corners cut off, giving it four roof faces: two large faces and two small faces. A mansard roof building is one whose slopes go upward from all sides of the building and intersect with a flat roof on top, so it has five roof faces. A pyramidal roof building has a roof shaped like a pyramid; the number of roof faces depends on the number of roof edges.

Table 3-1: A selection of primitive buildings (sample pictures omitted)

Primitive building            Roof type                                   Characteristics
Single-face roof building     Flat roof, shed roof or lean-to roof        Only 1 roof face; the number of roof edges equals the number of floor edges
Gable roof building           Gable roof, saddle roof or saltbox roof     2 roof faces; 1 ridge
Hip roof building             Hip roof                                    4 roof faces; 5 ridges; each roof edge belongs to one roof face
Semi-hip roof building        Semi-hip roof                               3 roof faces; 3 ridges
Hipped-gable roof building    Hipped-gable roof or cut-off gable roof     4 roof faces
Mansard roof building         Mansard roof                                5 roof faces
Pyramidal roof building       A-frame roof or pyramidal roof              The number of roof faces equals the number of roof edges; roof faces are triangles that intersect at one point

It is worth noting that the selection and definition of primitive buildings is extendable. For example, other primitive buildings can be added to this collection for specific regions, because building structures and types vary from region to region. The primitive buildings presented in this research differ, however, from the building primitives already used in many other studies, as described in Chapter 2.

3.3.3. Relations between buildings

In 3D space, buildings also have relations with each other. In this research, the spatial relationships between 3D building bodies are classified into three categories: detached, touch, and on/beneath. Detached means that two buildings are separated in 3D space and no parts of them meet (see Figure 3.5(a)). Touch is the horizontal relationship between two buildings at the same level, namely either on the ground or on the same roof: if one or more parts of a building meet one or more parts of another building, the relationship between the two buildings is called touch. Touch is a mutual relation; if building A touches building B, then building B also touches building A (see Figure 3.5(b)).

Figure 3.5: Building relationships: (a) detached; (b) touch; (c) on and beneath

On and beneath are conjugate vertical relationships between two buildings. They are defined as follows: for two buildings A and B where the floor of A vertically meets the roof of B, (1) if the floor polygon of A is spatially within the roof polygon of B, and (2) there is no common or identical edge between the two polygons, then A is on B and B is beneath A. According to this definition, the relationships touch and on/beneath can be differentiated. For example, in Figure 3.6(a) the floor of building A is within the roof boundary polygon of B; therefore A is on B and B is beneath A. In (b), although one edge of the roof polygon of building A is collinear with part of a roof edge of building B, the two edges are not exactly the same, so there is no common edge between the two buildings; thus A is still on B and B is beneath A. If the real building looks like (c) and we say that A is on B and B is beneath A as in (d), this is not correct, because there is a common edge between buildings A and B; in this case the correct relationship is that A touches B and B touches A, as in (e). The building relations defined here are very important for building decomposition when extracting 3D buildings.
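A minimal MATLAB sketch of condition (1) of the on/beneath definition above, using hypothetical 2D polygon coordinates; this simplified check only tests the polygon vertices and is not the thesis implementation.

    % Hedged sketch: test whether the floor polygon of building A lies within
    % the roof boundary polygon of building B (condition (1) of the 'on' relation).
    roofB  = [0 0; 10 0; 10 8; 0 8];          % roof boundary polygon of B (X Y)
    floorA = [2 2;  5 2;  5 4;  2 4];         % floor polygon of A (X Y)

    inside = inpolygon(floorA(:,1), floorA(:,2), roofB(:,1), roofB(:,2));
    isOnB  = all(inside);                     % true here: all floor vertices lie inside
    % Condition (2), the absence of a common edge between the two polygons,
    % would be checked separately before concluding that A is on B and B is beneath A.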

Figure 3.6: The difference between on/beneath and touch (panels (a) to (e))

3.3.4. Decomposition method and rules

When decomposing a composite building into primitive buildings, the following steps and rules must be followed:

Figure 3.7: Building decomposition example: (a) side view of a composite building; (b) top view of the same building; steps 1 to 3 of the decomposition into roofs A1 to A4, B1 and C1

First, all the primitive buildings located on the ground are distinguished according to the horizontal relationships; these buildings are called base buildings. Second, every base building is checked for buildings on its roof according to the vertical relationships. For every base building with one or more buildings on its roof, all buildings on the different roof levels are distinguished from bottom to top. Buildings on the same roof are distinguished according to the horizontal relationships, and the identification order follows their heights, from low to high. This step may contain many iterations until all primitive buildings have been distinguished according to the roof structures. Only after the decomposition based on one base building is finished does the decomposition based on the next one begin.

For example, Figure 3.7 shows a complex building; (a) and (b) are side and top views of the building. In step 1, according to the rules and the roof structure of the composite building, three base buildings are distinguished: A1 with a flat roof, B1 with a gable roof and C1 with a flat roof. All of them are located on the ground and have defined roof types. In step 2, decomposition is performed for each base building A1, B1 and C1. Because there are no buildings on either B1 or C1, they do not need to be decomposed further. On the roof of base building A1 there is another composite building with two flat roofs, so buildings A2 and A3 can be distinguished. In step 3, A2 is lower than A3 and there is a building on its roof, so building A4 with a flat roof is distinguished first. There is no building on the roof of A3, so the decomposition is finished. Figure 3.8 shows the decomposition structure of the complex building in Figure 3.7 and the relations between the decomposed primitive buildings.

Figure 3.8: An example building decomposition structure (primitive buildings A1 to A4, B1 and C1 with their on, beneath and touch relations)

Based on the above decomposition method and rules, the data collection method used in the actual 3D building data extraction is described later in the implementation (Chapter 5).

3.4. Internal relations and attributes

In Section 3.3.3 the spatial relations between buildings were defined; those relations are very important for building decomposition. Besides these external spatial relations, the relationships within the building model, namely the internal relations and attributes, also need to be defined. These relationships are

crucial for maintaining the internal topology of a building and are furthermore important for building combination. In the proposed object-oriented building model every object is a class, so the internal structure and topology of the building can be maintained through inheritance and the relationships between the corresponding classes. The properties of building and building-part objects are implemented as attributes of the corresponding classes; thus every object knows to which class it belongs and which attributes and methods it owns. The multiplicities of the relationships between two classes are also implemented as attributes of the classes.

Figure 3.9: UML class diagram of relations and attributes of the object-oriented building model, with the following classes and attributes:
- 3D Building Model: ID, Class, Name, BuildingType, Usage, Function, Owner, Height, Storey, BaseElevation, IfaBaseBuilding, Roof, Wall, Floor, TouchingBuilding, OnTopof, Beneath
- Single Building and Composite Building (Composite Building adds NumofRoofs, Buildings, BaseBuildings)
- Roof: ID, RoofType, NumofFaces, NumofEdges, TouchingRoof, RoofPolygon
- Wall: ID, ConnectedWalls, TouchingWall
- Floor: ID, IfonGround, OnWhichRoof
- Roof face: ID, ConnectedFaces, FacePlane

Figure 3.9 shows the UML class diagram of the relations and attributes of the object-oriented building model. Buildings usually have many common attributes, as listed in the 3D Building Model class, for example Name, Usage, Owner, Height and Storey, and above all every building object has a unique ID. The classes Single Building and Composite Building inherit all these attributes. In addition, some attributes are employed to store the relationships among the classes. The attributes Roof, Wall and Floor store the IDs of the roofs, walls and floors of which a building is composed, so it is known which roofs, walls and floors belong to which buildings and which buildings are composed of which parts. The relationships between buildings can be obtained from the attributes TouchingBuilding, OnTopof and Beneath; for example, if building A is on building B, the ID of A is stored in the Beneath attribute of B and the ID of B is stored in the OnTopof attribute of A. The class Composite Building has three further attributes: NumofRoofs stores the number of roofs, while Buildings and BaseBuildings store the IDs of the buildings and the base buildings of the composite building. In the class Roof, the attributes RoofType, NumofFaces and NumofEdges give the roof type and the number of roof faces and edges of a roof, and the attribute TouchingRoof gives the IDs of the roofs touching it, as does the attribute TouchingWall in the class Wall. The attribute ConnectedWalls shows with which walls a wall is connected, and RoofPolygon stores the roof boundary polygon. The Boolean attribute IfonGround in the class Floor tells whether the floor is on the ground; if not, the attribute OnWhichRoof shows on which roof the floor lies. In the class Roof face, the coefficients of the face plane are stored in the attribute FacePlane. The class diagram thus defines the relationships both between buildings and between building parts. It is also worth noting that, if needed, more attributes can be added to any class in the diagram; in other words, the definition is extendable.

3.5. Building combination

Building combination is the reverse process of building decomposition. After the decomposition of a composite building, the extracted primitive buildings need to be combined to form and restore the composite building as it really is. Building decomposition facilitates the extraction of the building data by splitting the building into building units and primitive buildings; building combination then combines these primitive buildings and building units into a correct building description by performing combination operations. Inspired by the advantages of the CSG model, this thesis introduces three operation algorithms for building combination. The difference is that in a CSG model Boolean set operators (union, intersection and difference) are used to combine simple objects such as cubes, boxes, tetrahedrons or quadratic pyramids into more complex objects, whereas in this research the three building operations are employed to combine object-oriented building models into correct, more complicated building models; the algorithms are therefore different. The concepts and principles of the three operations are described below; their application to building refinement is presented in Section 5.9.3.
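To make the attribute discussion concrete, the following MATLAB sketch mirrors the classes of Figure 3.9 with simple structures. The field names follow the diagram, but the concrete values and the struct-based layout are illustrative assumptions, not the thesis implementation.

    % Hypothetical structures mirroring the class attributes of Figure 3.9.
    building = struct( ...
        'ID', 1, 'Name', 'Building 1', 'BuildingType', 'single', ...
        'Height', 9.5, 'Storey', 2, 'BaseElevation', 3.2, ...
        'Roof', 11, 'Wall', [21 22 23 24], 'Floor', 31, ...      % IDs of the parts
        'TouchingBuilding', [], 'OnTopof', [], 'Beneath', 2);    % relations by building ID

    roof = struct( ...
        'ID', 11, 'RoofType', 'gable', 'NumofFaces', 2, 'NumofEdges', 4, ...
        'TouchingRoof', [], ...
        'RoofPolygon', [0 0 9; 8 0 9; 8 5 9; 0 5 9]);            % roof boundary nodes [X Y Z]

    roofface = struct('ID', 111, 'ConnectedFaces', 112, ...
        'FacePlane', [0 -0.6 0.8 -7.2]);                         % plane coefficients [A B C D]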

3.5.1. Three building operations

In general, operations between buildings could be performed in two directions: vertically and horizontally. In the presented model, if a building is on top of another building, its floor is calculated by projecting its roof boundary polygon onto the roof face on which it is located, so a touch relationship between the two building parts already exists; in other words, the two buildings are already glued together. Therefore, the operations introduced here are all horizontal operations, that is, the two buildings to be operated on must be either on the ground or on the same roof.

(1) Merge
Merge is used to join two primitive buildings, two composite buildings, or one primitive building and one composite building. The result can be either a composite building or a primitive building. When two buildings meet each other, the common parts of the touching walls are deleted; the remaining parts of the two buildings are then connected to form a new building (see Figure 3.10). Merge is thus a wall-based computing algorithm.

Figure 3.10: Merge operation: (a) two buildings; (b) after merge; (c) wireframe view

(2) Gluing
Gluing is also a wall-based computing operation, used to glue two buildings together. It is much like merge; the difference is that gluing only makes the common parts of the two touching surfaces coincident and does not delete any building parts (see Figure 3.11).

Figure 3.11: Gluing operation: (a) before gluing; (b) after gluing; (c) wireframe view

(3) Clip
Clip is employed to cut one building with another building. In order to perform clip, the two buildings must intersect. With this operation, the part of one building that lies within the vertical space of the other building is removed from the former; the roof, walls and floor of this building are then regenerated to form a new building. In Figure 3.12, the building in (c) is obtained from the two buildings in (a) using clip. Because the clip operation destroys and reforms the roof shape and structure of a building, the building is reconstructed using the new roof; clip is therefore a roof-based computing algorithm.

Figure 3.12: Clip operation: (a) two buildings A and B; (b) clip B from A; (c) building A after clip

One important property of the building operations presented above is that they can be combined in sequence to deal with more complicated scenes. Figure 3.13 shows an example in which clip and merge are combined to extract a composite building; the order is first clip, then merge. Using the decomposition method described in Section 3.3.4, the composite building C in Figure 3.13(a) can be decomposed into two buildings, A and B. By clipping B from A we obtain building D. Then, by performing the merge operation on D and B, building C can be reconstructed correctly, as shown in (d).

Figure 3.13: An example of combining building operations: (a) a composite building C; (b) after decomposition into A and B; (c) building D after clipping B from A; (d) C reconstructed by merging D and B

In Section 5.9.3, real examples of combining building operations to refine intersecting buildings in different cases are given.

3.5.2. Combination method and rules

In order to obtain correct results, some rules must be followed when combining buildings.
(1) Priority of the building operations: the order in which the three building operations are performed is clip, merge, gluing; that is, clip has priority over the other operations.
(2) Top-down procedure: because all three building operations are horizontal operations, for a multi-level complex building the combination should be performed vertically from top to bottom.

For example, the complex building in Figure 3.7 was decomposed into six single buildings by the decomposition process. The complex building can then be reconstructed by combining these six buildings with the building operations, following the steps in Figure 3.14. In step 1 no operation is needed because there is only one building. In step 2, merge is performed on A2 and A3. In step 3, the three base buildings are united into one composite building using merge. Thus the original complex building of Figure 3.7 is reconstructed.

Figure 3.14: An example structure of building combination (step 1: A4; step 2: A2 merge A3; step 3: A1 merge B1 merge C1, giving the complex building)

3.6. Data structure and storage

In general, the data of 3D building models can be organized and stored in two ways: in files or in a database. The data management and storage can be divided into two stages. At the data extraction stage, the data structure should facilitate building decomposition and combination. After data extraction, the data should be storable and manageable in a DBMS and easily convertible to other data formats.

3.6.1. Data organization for building extraction

In order to achieve successful 3D building extraction, the data organization must be in accordance with the data extraction method and well structured. From Figure 3.8 and Figure 3.14 it can be seen that a tree structure is very suitable for the management and organization of 3D building data. One important idea is that a complicated building can be spatially split into several building units according to its base buildings: a base building together with all the buildings on its top is regarded as one composite building. For example, the complex building in Figure 3.7 can be reorganized into three buildings, building 1, building 2 and building 3, as shown in Figure 3.15(a). Each building is given a number so that it can be identified easily in a programming language. The data structure of this complex building can then be represented by the tree structure shown in Figure 3.15(b). Following this analysis, a hierarchical general tree data structure for the presented 3D building model can be derived, as shown in Figure 3.16. The 3D building models may consist of many

buildings, including both single and composite buildings, and every building has one base building. All the buildings on top of the base building are added to the data structure hierarchically, according to their different levels, using a bottom-up method. This data structure is very important for data extraction: the order of the building extraction should follow the data structure presented here. A real example of the implementation of this data organization is described in the implementation (Chapter 5).

Figure 3.15: An example data organization for building extraction: (a) the building of Figure 3.7 split into 3 building units; (b) the tree data organization of the same building

Figure 3.16: General data structure for building extraction (3D building models, buildings 1 to n, each with a base building and the buildings on its top)

3.6.2. Data storage

Since geo-DBMSs have become very popular in practice, with the advantage of integrating and maintaining spatial and non-spatial information in one DBMS environment, and since DBMSs will, as a future trend, play a central role in the new generation of 3D GIS, the data of the presented building model also need to be storable in a DBMS so as to keep abreast of technical developments. Although 3D geometrical models cannot yet be fully supported by DBMSs, 3D objects can be stored using 3D polygons (Stoter, 2004). This suits the object-oriented 3D building model very well, because buildings extracted according to the proposed building model are bounded by a set of 3D roof face polygons, wall polygons and floor polygons. Along with the object-oriented 3D building model, this thesis provides two kinds of storage methods, for primitive buildings and for generic buildings. Both methods can easily be implemented with file-based data storage as well as DBMS-based data storage.

Generic storage of buildings

In general, every building can be stored as a polyhedron using 3D polygons, as shown in Figure 3.17. This can be implemented in two ways: using a set of polygons or using a multipolygon (Stoter, 2004), a multipolygon being an object consisting of several polygons.

Figure 3.17: Generic storage of 3D buildings (3D building, polyhedron, floor, wall and roof face 3D polygons)

Storage of primitive buildings

For primitive buildings, which have fixed geometry and topology, the data storage can be described as in Figure 3.18. A 3D primitive building is defined by storing the ordered nodes (X, Y, Z) explicitly; each floor, wall and roof face polygon then only stores the IDs of the nodes defining it. One advantage of this storage is that each node is stored only once. It resembles a boundary representation, and the internal topology can be maintained.

Figure 3.18: Primitive building storage (3D primitive building, polyhedron, floor, wall and roof face node lists, ordered nodes X, Y, Z)
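As an illustration of the node-list storage of Figure 3.18, the following MATLAB sketch stores a hypothetical flat-roof primitive building; the struct layout and coordinates are assumptions made for illustration, not the thesis data format.

    % Hedged sketch of Figure 3.18: nodes are stored once, and every bounding
    % polygon of the primitive building only references node IDs.
    nodes = [0 0 3;  8 0 3;  8 5 3;  0 5 3; ...      % floor corners (X Y Z)
             0 0 9;  8 0 9;  8 5 9;  0 5 9];         % roof corners of a flat roof

    flatRoof.nodes     = nodes;
    flatRoof.floor     = {[4 3 2 1]};                                % node IDs of the floor polygon
    flatRoof.walls     = {[1 2 6 5], [2 3 7 6], [3 4 8 7], [4 1 5 8]};
    flatRoof.rooffaces = {[5 6 7 8]};                                % one roof face for a flat roof

    % Recover the 3D polygon of, for example, the first wall:
    wall1 = nodes(flatRoof.walls{1}, :);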

A data storage and collection rule

The nodes (or vertices) of outer boundaries should be ordered anti-clockwise as seen from the outside of a building; conversely, the nodes (or vertices) of inner boundaries should be ordered clockwise. For example, in Figure 3.19 there is a cavity in the planar roof of a building, so the roof face polygon has two boundaries; in this case the nodes of the inner boundary are ordered clockwise so as to represent the cavity.

Figure 3.19: Data collection and storage rule

This node-order rule for polygons should also be followed when collecting the roof edges in the images.

3.7. Summary

In this chapter, based on the knowledge and experiences gained in Chapter 2 and using UML, an object-oriented building model has been developed that covers the aspects necessary for 3D building extraction and for data management towards a future object-oriented database management system. In the presented object-oriented building model, buildings are regarded as objects, each composed of three building parts: roof, wall and floor. By taking the roof as the major part, a composite building can be decomposed into primitive buildings, which is very much in accordance with the data collection method using aerial imagery and ALS points. By defining building relations and primitive buildings, a complex building can easily be decomposed, and the decomposition is very flexible. The three building operations provide a very promising way to construct and represent complex buildings correctly. Combining the building decomposition and combination methods makes it possible to extract buildings even in very complicated scenes. It can therefore be expected that the presented object-oriented building model is capable of supporting 3D building extraction and, furthermore, of improving the process. In the next chapter, the object-oriented building model will be integrated with the ALS points and aerial imagery, and the object-oriented model based method and workflow for 3D building extraction will be worked out.


4. Method design

This chapter designs the method for 3D building extraction by integrating the object-oriented building model, ALS points and aerial imagery. The related aspects of the method, including the input data, how to integrate ALS points and aerial imagery for roof recognition and reconstruction, and how to combine the object-oriented model with the data extraction, are addressed and presented. Finally, a workflow for object-oriented model based 3D building extraction using ALS points and aerial imagery is developed and proposed. In addition, the implementation method for the workflow is worked out and the implementation platform is chosen.

4.1. Design aspects

Design goal

The goal is to develop a practical and operational workflow for accurate and reliable 3D building data extraction by integrating the 2D edges extracted from aerial imagery and the 3D roof faces generated from ALS data points, with the following characteristics:
- incorporation of the object-oriented 3D building model;
- hybrid automation levels.

Input data

In order to make the proposed method more generic, datasets with little specific information were assumed and selected as the input data for the proposed workflow:
- Aerial imagery, including conventional frame aerial photographs and ordinary digital camera images taken by a projective camera mounted on an airplane or helicopter. The data format can be any of the popular digital image formats, such as TIFF or JPG.
- ALS points covering the same area as the corresponding aerial imagery, with only 3D coordinate information. For generality, the data are in a simple ASCII file format containing only the 3D coordinates of the ALS points. Other specific ALS data formats can be converted to ASCII format; a MATLAB program for converting LAS format to ASCII format is provided in an appendix.

4.2. Roof recognition and construction

In this research, roof recognition means the acquisition of correct roof edges, roof faces and their relationships. It is the most important part of 3D building extraction and also the first point where the object-oriented building model, the ALS point data and the aerial imagery come together. In general, besides the possible derivation of additional attributes, building extraction or reconstruction includes detection, recognition and geometric reconstruction (Brenner, 2005). Recognition and construction are thus the core problems of the whole procedure.

According to the object-oriented building model, the roof is the major component and the most important part for 3D building model construction: once the roof is extracted, the building is extracted. At the same time, when using aerial imagery and ALS data, the roof is also the main feature that can be measured directly. The core problem of building extraction thus becomes roof recognition and construction. Therefore, based on the object-oriented building model, the integration of ALS points and aerial imagery aims at improving roof recognition and construction so as to achieve reliable and more automatic 3D building extraction.

(1) Roof recognition
First, in the estimation of 3D roof faces from the ALS points, knowledge of the roof shape and structure perceived from the image improves the results, especially in complicated scenes; for example, the number of roof faces observed in the image helps to eliminate disturbing points or segments and to find the correct roof faces. Then, after detecting 2D edges in the image and deriving 3D faces from the ALS points, the two datasets are integrated at feature level in two steps:
- By projecting the convex hulls of the 3D roof faces that belong to a roof onto the image and grouping them, a 2D roof convex hull is obtained; the edges of this convex hull provide the most likely approximate locations of the true roof edges.
- By estimating the relations between the detected edges and the edges of the convex hull, the true edges are determined and consequently the 2D roof is recognized.
Considering the complicated situations and the complexity that roofs can have, roof recognition with hybrid automation levels is more practical and reliable.

Figure 4.1: 3D roof construction by integrating 2D edges and 3D roof faces (camera projective centre, accurate 2D roof edge, 3D ridge by intersecting two faces, intersection of a view plane with a roof face plane, roof face planes, accurate 3D roof, accurate 3D building model)
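The ridge computation indicated in Figure 4.1, intersecting two adjacent roof face planes, can be sketched in a few lines of MATLAB; the plane coefficients below are made up for illustration and the code is not taken from the thesis.

    % Hedged sketch: the 3D ridge line as the intersection of two adjacent roof
    % face planes, each given by coefficients [A B C D] of Ax + By + Cz + D = 0.
    p1 = [0 -0.6 0.8 -8];            % roof face plane 1 (hypothetical)
    p2 = [0  0.6 0.8 -8];            % roof face plane 2 (hypothetical)

    d = cross(p1(1:3), p2(1:3));     % direction vector of the ridge line

    % One point on the ridge: solve both plane equations plus d'*x = 0.
    M  = [p1(1:3); p2(1:3); d];
    x0 = M \ [-p1(4); -p2(4); 0];

    fprintf('Ridge through (%.2f, %.2f, %.2f), direction (%.2f, %.2f, %.2f)\n', x0, d);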

(2) Roof construction
As shown in Figure 4.1, the accurate 3D building roof is formed using accurate 3D ridges, generated by intersecting roof planes, and accurate 3D edges, generated by intersecting the view planes through the 2D edges with the 3D roof face planes. Finally, the accurate 3D building data are extracted and the building model can be reconstructed. The underlying facts are that the 3D ridges calculated by intersecting two adjacent roof planes have both high planimetric and high vertical accuracy, and that the planes derived from the point clouds usually make a good intersection angle with the view planes of the edges in the images. The 3D edges can then be obtained by intersecting the view planes with the roof planes; the 3D roof planes generated from the point clouds assure good height accuracy, whereas the view planes through the 2D edges are responsible for the planimetric accuracy.

4.3. Object-oriented model based approach

As shown in Figure 4.2, when using ALS points and aerial imagery, the proposed object-oriented model can be combined with the data extraction procedure for 3D buildings to form a model based approach.

Figure 4.2: Object-oriented model based approach for 3D building extraction using ALS points and aerial imagery (the object-oriented building model, with conceptual model, building decomposition and primitive buildings, building operations and combination, and data structure/storage, linked to the extraction process: 2D roof edges from the aerial image and 3D roof faces from the ALS points, roof recognition, building construction and model refinement, visualization and storage, 3D building model)

1. First, the conceptual model provides an indication of what kind of object data need to be derived from the ALS points and aerial images. Here, according to the intrinsic characteristics of aerial imagery, ALS points and the conceptual model, the concepts and definitions of the object-oriented model enter the process through the roofs: in order to extract roofs,

the roof edges and roof face planes should be extracted from the aerial image and the ALS points, respectively.
2. The principles and rules of building decomposition and the roof type information of the primitive buildings are used to assist and improve the integration of the 2D roof edges and 3D roof faces for building roof recognition. Every roof has a corresponding primitive building.
3. The building combination method defined in the model is then employed to combine the primitive buildings decomposed in the previous step into a correct description of a more complex or composite building. At the same time, the three building operations defined in the object-oriented building model are employed to refine the extracted building models towards a more correct and reasonable representation.
4. In the end, according to the data structure of the object-oriented building model, the extracted building data can be visualized and stored in a file or in a DBMS, so that they can be used for further applications.

4.4. Workflow

Based on the object-oriented model based approach, a workflow for object-oriented model based 3D building extraction using ALS points and aerial imagery can be devised, as illustrated in Figure 4.3. It is a complete workflow from the raw input data to the final extracted 3D building models. The whole process can be broken down into two main stages: data preparation and 3D building extraction.

4.4.1. Data preparation

Before building extraction, the camera projection matrix of every image needs to be calculated; this is the transformation matrix from 3D world coordinates to 2D image coordinates. If the camera parameters, both interior and exterior, are provided, the projection matrix can be calculated directly using the method described in Appendix 2. If the camera parameters are unknown, the methods in (Habib, 2004) and (Delara, 2004) can be employed to calibrate the camera and calculate the projection matrix. In this research, a method using Imagemodeler to register aerial images with ALS points was developed; the results show that it is a promising method that can be improved towards a more automatic process. A detailed description of this method can be found in Appendix 3. In the meantime, the ALS points are converted into ASCII format if they are in other formats.

4.4.2. 3D building data extraction

As soon as the input data are ready and the projection matrix of every image has been calculated, the 3D building data extraction procedure begins. The whole process consists of six steps:

(1) Define working area
First of all, the working area needs to be identified. Usually the area of a project is very large, and the large amounts of image raster data and ALS point clouds can make some automatic processes fail and waste much time. In order to reduce the data volume, it is good practice to define a relatively small working area at a time and to split an image into pieces. After defining the working area, the image and ALS points within this area are extracted from the input data for the following process.

In this research, a novel method to define a working area using ALS points and an aerial image was developed (see Section 5.1).

(2) Identify object building
Next, the buildings to be extracted and modelled are identified one by one within the working area by drawing a polygon containing each of them on the image. A building can be either a primitive building or a composite building. A cut-off image of the building is then cropped, and the ALS points falling within the image area are filtered out automatically. Making a cut-off building image and corresponding cut-off ALS points facilitates the automatic and semi-automatic roof recognition processes by:
a) reducing the number of detected edges;
b) improving the success rate of the automatic estimation of the roof face planes, by providing the roof type information obtained from the image to the ALS data processing.

(3) Automatic edge detection and simplification
In this step, automatic edge detection and simplification algorithms are used to extract 2D edges from the cut-off building image obtained in the previous step.

(4) Hybrid automation level roof recognition and construction
It is worth noting that, although many edge detection methods have been developed, the scene and situation can be very complicated due to shadow, low contrast, image resolution, occlusion and the complexity of buildings. In most cases the results of automatic edge detection are not sufficient for fully automatic 3D building extraction. Human interaction is therefore necessary, either for buildings with complicated roof structures or when the results of automatic edge detection are not suitable for automatic processing. However, by improving the results of the automatic edge detection, the interaction can be made easier and reduced; furthermore, a semi-automatic process with minimum intervention becomes feasible. Therefore, in order to make the workflow more practical and at the same time improve the degree of automation, three roof recognition processes with different automation levels can be selected according to the specific situation:

(a) Automatic roof recognition
For single buildings, or buildings with a rectangular roof edge polygon in an uncomplicated scene, if the result of the edge detection is good (for example, all edges are detected), the automatic roof recognition process can be selected, and the roof edges and their corresponding roof faces are extracted and determined automatically without any intervention. If the automatic process fails, the user has the opportunity to try the semi-automatic roof recognition described below.

Figure 4.3: Workflow for object-oriented model based 3D building extraction using ALS points and aerial imagery (from the ALS points and aerial imagery, through data format conversion and camera projection matrix calculation, to the six extraction steps: define working area, identify object building, edge detection and simplification, selection of the automation level for roof recognition (interactive, semi-automatic or automatic), automatic roof construction, 3D building construction and visualization with optional model refinement, and export of the 3D building models to VRML or DXF)

(b) Semi-automatic roof recognition
For single buildings and buildings with a rectangular roof edge polygon, if all edges are detected but one or two of them are difficult for the automatic process, or if the automatic roof recognition has failed, the semi-automatic roof recognition process can be selected. In this process the operator only needs to identify one, or at most two, of the correct edges in the difficult part; the roof edges and their corresponding roof faces are then extracted and determined automatically. If the semi-automatic process also fails, the most reliable process, interactive roof recognition, is performed.

(c) Interactive roof recognition
For buildings with complex roof structures, and for those that cannot be extracted with either the automatic or the semi-automatic process, the interactive roof recognition process can be selected. In this process all the correct edges belonging to a roof are selected manually, so some interactive functions for editing the edges are needed: they allow edges to be added where they were not detected and detected edges to be modified when they are not accurate. This provides the most reliable and capable process for most building roofs in any case. After the correct edges have been selected, their corresponding roof faces are also extracted and determined automatically.

It can be seen that, no matter which process is chosen, the relations between 2D roof edges and 3D roof faces are determined automatically; the difference only lies in how the correct roof edges are selected. Also, no matter which process is chosen, once the relations have been determined the 3D roof can be constructed automatically in the roof construction process.

(5) Building visualization and model refinement
After the extraction of the building roof, the 3D building is constructed and visualized automatically. For a composite building, the combination process is performed and the building model is refined using the three building operations. At the same time, the 3D building models are refined for a more logical and correct representation. After finishing the extraction of one building, the operator can either continue to extract another building in the same working area or select another working area to extract the buildings there.

(6) 3D building storage and data export
In the end, the extracted 3D building is represented and saved in a proprietary format. Furthermore, in order to exchange data with other software, the 3D building data can be exported to other popular 3D data formats, for example VRML and DXF, so that the data can be used by other systems.

4.5. Implementation platform

MATLAB is chosen as the main implementation platform and programming tool for the proposed workflow because it is very strong in matrix computation, image data processing, algorithm and application development, and data visualization, and above all it is easy to use and learn. In addition, SUGR (SUGR website), a Java library for Statistical Uncertain Geometric Reasoning, is selected for calculating joins and intersections with uncertainty and for testing uncertain geometric

relations. SUGR is very suitable for, and capable of, representing and estimating projective camera transformations, representing, combining and estimating uncertain points, lines and planes in both 2D and 3D, and estimating geometric relationships (Heuel, 2004). More importantly, Java classes can be instantiated and manipulated directly in MATLAB, so the combination of MATLAB and SUGR is very suitable for the implementation work. In addition, the surface growing segmentation function of PCM (Point Cloud Mapper), an existing interactive program developed by Prof. George Vosselman, is used to obtain the segmentation of the ALS points during the implementation. Based on these development tools, the next chapter implements the complete workflow for object-oriented model based 3D building extraction by integrating ALS points and aerial imagery.
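The MATLAB-Java bridge mentioned above can be illustrated with a standard Java class; the snippet below is only a generic illustration of calling Java from MATLAB and does not show the SUGR API itself.

    % Generic illustration of instantiating and manipulating a Java class from
    % MATLAB (the same mechanism is used to drive the SUGR library).
    list = java.util.ArrayList();      % create a Java object
    list.add('roof edge');             % call its methods directly from MATLAB
    list.add('roof face');
    fprintf('The Java list holds %d items.\n', list.size());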

5. Implementation of 3D building extraction

In this chapter, the proposed workflow for object-oriented model based 3D building extraction is implemented. All implementation aspects and algorithms are presented, and the 3D data extraction and building model reconstruction of four test buildings of different types are demonstrated. By grouping the program code, a prototype production system is finally developed, and with it the 3D building models of a small test area are extracted and reconstructed.

5.1. Define working area

Usually the amount of data in a project is very large. For example, a scanned aerial photograph in TIFF format with a pixel size of 14 microns is more than 800 MB, and the ALS data of only one square kilometre with a measurement density of 10-20 points per square metre contain 10,000,000-20,000,000 points. The large image data volume can cause problems and make some automatic processes fail. Therefore, the ability to split the project area into pieces and to let the user define a specific working area is very useful and practical.

Figure 5.1: Illustration of the definition of a working area (an area selected on the aerial image is projected onto the average elevation plane of the ALS data, a 3D bounding box is derived, and the working image and working 3D ALS points are cropped)

This thesis presents a novel method that integrates the ALS points and a single aerial image for the definition of a working area. The approach consists of three steps (see Figure 5.1):
(1) First, select an area on an aerial image by drawing a rectangle containing all the buildings to be extracted (the red dashed area in the figure). This area does not need to be accurate; an approximate area is sufficient. At the same time, by reading the ALS points in this area, the average elevation of the area is computed and a level plane at this average elevation is set up.
(2) Then, the rectangle is projected onto the average elevation plane to form a 3D polygon with the same Z value as the average elevation plane, and a 3D bounding box of this polygon is derived.
(3) The ALS points within the bounding box are cropped from the original ALS data, and the working image is cropped from the image according to the selected area.
The buildings within the working area are then extracted and reconstructed by identifying them one by one.
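A simplified MATLAB sketch of the cropping idea in these steps; the coordinates are made up, and the projection of the image rectangle onto the average elevation plane via the camera model is not repeated here, so this is an illustration rather than the thesis code.

    % Hedged sketch: derive the average elevation of the selected area and crop
    % the working ALS points with a 2D bounding box (placeholder coordinates).
    als  = [100.2 200.1 3.4; 150.7 240.3 9.8; 120.5 210.8 4.1; 400.0 500.0 4.6];  % [X Y Z]
    bbox = [ 90 190;          % [xmin ymin]
            200 260];         % [xmax ymax]

    inBox = als(:,1) >= bbox(1,1) & als(:,1) <= bbox(2,1) & ...
            als(:,2) >= bbox(1,2) & als(:,2) <= bbox(2,2);

    zMean      = mean(als(inBox, 3));   % average elevation plane of the selected area
    workingPts = als(inBox, :);         % working 3D ALS points for this area
    fprintf('Average elevation plane at Z = %.2f m, %d working points\n', zMean, sum(inBox));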

5.2. Identify object building

This step identifies and selects a building to be extracted; it can be either a primitive building or a composite building. The selection method is as follows. On the working image, a polygon just containing the object building is drawn to define a building area; the polygon should not be too large, only slightly bigger than the bounding box of the building. The image of this area is then cut out to make a cut-off building image, and the working ALS points within this area are cut out to make the cut-off ALS point dataset. The building type is given and stored so that it can later be used automatically to facilitate the roof recognition process. Figure 5.2 shows two examples of cut-off building images.

Figure 5.2: Examples of cut-off building images: (a) a target composite building with 2 planar roofs; (b) a target gable roof building

At the same time, the cut-off ALS points are automatically separated into two parts, roof-level ALS points and ground-level ALS points, by the following process:
(1) Find the minimum Z value of the 3D cut-off ALS points.
(2) Assuming that the eaves of a building are more than 3 metres above the ground, points higher than Z+3 are put into the roof-level ALS points (this value can be set according to the specific area).
(3) Points with an elevation lower than Z+3 become ground-level ALS points.
The roof-level ALS points are used for roof face estimation and the ground-level ALS points are used for the determination of the building base elevation.
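A minimal MATLAB sketch of the roof-level/ground-level separation just described; the point array and the threshold variable are placeholders, not the thesis code.

    % Hedged sketch: split the cut-off ALS points into roof-level and
    % ground-level points using a height threshold above the lowest point.
    pts = [10.2 20.1  3.4;                 % hypothetical cut-off ALS points [X Y Z]
           10.8 21.5  3.6;
           11.0 20.9  9.8;
           11.6 21.2 10.1];

    eaveOffset = 3;                        % assumed minimum eave height above ground (m)
    zMin       = min(pts(:, 3));           % step (1): lowest elevation in the cut-off area

    isRoof     = pts(:, 3) > zMin + eaveOffset;   % step (2)
    roofPts    = pts(isRoof, :);                  % used for roof face estimation
    groundPts  = pts(~isRoof, :);                 % step (3): used for the base elevation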

5.3. Automatic edge extraction and simplification

Edge extraction

Edges or lines can be detected automatically by marking the points in the cut-off aerial image at which the luminous intensity changes sharply. Currently, the Canny operator is the most commonly used edge detection method and has shown significant advantages in general situations. Therefore, in this research the Canny algorithm is employed to extract building edges from the aerial image.

Figure 5.3: Edge extraction results using Canny: (a) edge map of Figure 5.2(a); (b) edge map of Figure 5.2(b)

Figure 5.3 shows the resulting edge maps of the two cut-off building images of Figure 5.2. It can be seen from the two edge images that:
- there are too many edge points;
- some edges cannot be detected (red circle);
- due to the colour and structure of the eaves, several double edges are detected (yellow circle) in both edge maps.
These effects make it very hard to find the correct building edges, so further processing is needed to simplify the situation.
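A minimal MATLAB sketch of this detection step, using the Image Processing Toolbox; the file name is a placeholder and the default parameters are not the thesis settings. The subsequent linking and simplification follow (Kovesi, 2006) and are described in the next subsection.

    % Hedged sketch: Canny edge detection on a cut-off building image.
    I = imread('cutoff_building.jpg');    % placeholder file name
    if size(I, 3) == 3
        I = rgb2gray(I);                  % Canny works on a greyscale image
    end
    bw = edge(I, 'canny');                % binary edge map, as in Figure 5.3
    imshow(bw);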

Edge linking and simplification

According to (Kovesi, 2006), this process consists of three steps:
(1) Edge linking. The edge points in the two binary edge images of Figure 5.3 are linked together into chains or contours; only connected points are linked. A minimum contour length of interest can be set, and contours shorter than this value are discarded.
(2) Form line segments. After edge linking the edge points are linked into contours, but these are not yet lines. Based on the contours, line segmentation is performed to form straight line segments using a specified tolerance, the maximum deviation: a segment deviating from the original edge by more than this value is broken into two.
(3) Merge edges. Finally, if the orientation difference between two line segments is less than a specified angle tolerance and, at the same time, the maximum distance between their end points is less than a threshold, the two line segments are merged into one line.

Figure 5.4 shows the results of the edge linking and simplification of Figure 5.3. The parameters used for Figure 5.4(a) and (c) are: minimum contour length 20 pixels, maximum deviation 2 pixels, maximum angle difference 0.05 radians and maximum distance threshold 2 pixels. The only difference in the parameters for Figure 5.4(b) and (d) is a minimum contour length of 40 pixels. It can be seen that in Figure 5.4(b) and (d) almost all the detected edge lines are retained while fewer useless lines remain. Further steps are based on these two images.

Figure 5.4: Edges after edge linking and simplification: (a) and (c) with minimum contour length 20 pixels; (b) and (d) with minimum contour length 40 pixels

5.4. Interactive roof recognition

As mentioned in Section 4.4.2, interactive roof recognition means that the correct edges of a roof are collected interactively and manually, whereas the relations between the 2D edges and the 3D roof faces are determined automatically. It therefore consists of three main processes: interactive roof collection, automatic roof face plane estimation, and determination of the relations between 2D edges and 3D roof faces. As the most reliable of the three, interactive roof recognition is applicable in most cases, especially in complicated scenes and for buildings with complex structures. Even if the edge detection does not give a good result, interactive roof recognition still works by virtue of its interactive editing functions.

Interactive 2D roof edge collection

Interactive functions

In this step, 2D roof polygons are formed interactively by an operator. The minimum set of interactive functions implemented includes:
Select edges: the operator clicks on or near an edge to select it.
Add edges: where edges could not be detected, the operator can add them by drawing a line along the edge on the image.
Delete edges: wrong edges, or edges disturbing the collection, can be deleted.

Collection rules

According to the object-oriented building model, some rules for roof collection are needed:
(1) The outer boundary polygon of a roof is collected anti-clockwise and any inner boundary polygon clockwise.
(2) If a complex building has more than one base building, it is divided into several composite buildings, one per base building: a base building together with all buildings on its top is regarded as one composite building. The composite building with the lower base building is collected first.
(3) For a composite building with one base building and several buildings on top of it, the collection order is bottom-up, level by level, according to the heights of the roofs and the relations between the buildings (see the building relations defined in the object-oriented building model).

Roof collection examples

For the gable roof in Figure 5.4(b), the roof collection is illustrated in Figure 5.5(a). The edges (in red) that form the roof edge polygon are selected one by one; the red numbers indicate the anti-clockwise collection order. There may be several edge segments along one image edge, and only the most accurate one should be selected. After the sixth edge is selected, the 2D roof edge polygon is generated automatically as shown in Figure 5.5(b). After forming the roof polygon, a roof type code is also given and stored with the polygon, which facilitates and improves the estimation and derivation of the roof faces from the ALS points later in this chapter. It is worth noting that for a gable roof under a vertical projection there should be 4 roof edges.

For the building in Figure 5.4(d), one edge segment failed to be detected, so an edge (the red line) is first added as shown in Figure 5.5(c). According to the building decomposition rules of the object-oriented building model, this is a composite building consisting of two planar roof buildings, each of which is a base building. Because the bigger building is lower than the smaller one, it is collected first. Finally, two building roofs are formed as shown in Figure 5.5(d). In general, if there are N base buildings, the buildings are numbered from Building1 to BuildingN; if there are N buildings on a base building, their roofs are numbered from Roof1 to RoofN according to collection rule (3) above.

(a) Interactive roof edge selection (b) 2D roof of the gable roof (c) Add an edge on the image (d) 2D roofs of the composite building
Figure 5.5: Examples of interactive 2D roof collection

Select ALS points belonging to building roofs

After the 2D roof polygons have been formed, the 3D ALS points falling inside them are filtered out in three steps (a minimal sketch follows below):
(1) Project the roof-level ALS points onto the cut-off building image to obtain 2D ALS points.
(2) Select the 2D ALS points falling inside a roof using the 2D roof polygon.
(3) Select the corresponding 3D ALS points belonging to that roof.
One important rule here is that a higher roof takes precedence over a lower roof. That is, ALS points are selected in the reverse order of the roof polygon extraction: 1) from BuildingN down to Building1, and 2) within one building, from RoofN down to Roof1. The advantage of this order is that it avoids assigning to a roof those ALS points that actually belong to an upper roof.

Figure 5.6 shows the result of selecting the ALS points belonging to the two building roofs. The points of the small roof were selected first because it is higher than the big one. Thus the points belonging to the two different buildings could be selected separately, and the points in the overlapping part of the two roofs are assigned to the higher, smaller roof. The selected 2D and 3D ALS points within the gable roof are shown in Figure 5.7 (the pink points) and Figure 5.8(a).
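A minimal sketch of this selection, assuming a 3x4 camera projection matrix P (as computed in the data preparation step of this workflow) and MATLAB's built-in inpolygon; all variable names are illustrative.

% points3D: N x 3 roof-level ALS points; P: 3 x 4 camera projection matrix
% xv, yv: image coordinates of the 2D roof polygon vertices
X   = [points3D, ones(size(points3D, 1), 1)]';  % homogeneous 3D coordinates, 4 x N
x   = P * X;                                     % project the points into the image
col = x(1, :) ./ x(3, :);                        % image column of each point
row = x(2, :) ./ x(3, :);                        % image row of each point
in  = inpolygon(col, row, xv, yv);               % projections falling inside the roof polygon
roofPoints3D = points3D(in, :);                  % the corresponding 3D ALS points of this roof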

63 Figure 5.6: Select ALS points within roofs Figure 5.7: Determine building base elevation Obtain the building base elevation The base elevation of a building is very important for the 3D building reconstruction. It can be determined by: (1) Extent the roof polygon of the base building 3-5 meters and form an extended base roof polygon (the blue polygon in Figure 5.7). (2) Select the 3D ALS points from the ground level points between the two polygons (the blue points in Figure 5.7), most of these points should be on the ground adjacent to the building. (3) Obtain the lowest ALS point the Z value of this point will be taken as the building base elevation. It is worth noting that for a building, either a single building or a composite building, there is only one building base elevation. For a composite building with more than one base building, the above process should be only performed on the lowest base building Obtain 3D roof faces and face planes This step aims to obtain the roof faces and their plane equations based on the roof level 3D ALS points within the roof Plane growing segmentation Surface growing segmentation is one of the major algorithms to segment ALS point clouds into smooth surfaces. Because the building model proposed in this research is polyhedral, we only concentrated on planar surfaces, namely plane growing. The PCM program (see Section 4.5) is employed to perform the segmentation. The process consists of five steps: (1) Seed planes are selected by testing if the height of points within some distance of a point can be approximated by a plane. (2) The seed planes are extended with other points adjacent to the plane that have a short perpendicular distance to the plane. And if adding points to the plane, the plane parameters will be updated. (3) The growing of a plane continues until no further adjacent points are found at a short perpendicular distance to the surface. (4) The seed planes are processed one after another until all points have been assigned to a plane number. 51

64 (5) Planes with a lower number of points than a specific value will be eliminated. Usually, there are two parameters are very important for achieving good results: plane growing radius and maximum distance to plane, which can be adjusted according to specific situations. In Figure 5.8, (a) is the 3D ALS points within the gable roof polygon selected in previous step. It can be seen some points on the wall also were selected but they can be automatically removed later. From (b) to (e), different results with different plane growing parameters are shown. Among them, (b) has too many segments and there is a wrong segment (the brown segment) in (c). Both of them cannot be used for further process. In (d), the number of segments is reduced and it has one main segment and some bigger segments. It can be used for the following calculation. (e) has two main segments just in accordance with the gable shape and therefore is the best. In general, if the segmentation result contains all main face segments and without too many other segments and wrong segments like (d), it can be used in the next process. (a)selected ALS points within the roof (b)radius=0.3 m Maximum distance=0.3m (c)radius=0.4 m Maximum distance=0.3m (d)radius=0.4 m Maximum distance=0.15m (e)radius=0.5 m Maximum distance=0.15m (f) (e) with small segments removed Figure 5.8: Results of plane growing segmentation with different parameters Estimate and calculate face planes Normally, after segmentation the number of segments is more than the number of the faces that the roof has just like in Figure 5.8(d). It is still difficult to determine the correct roof face planes. Segments that should be coplanar need to be merged into one segment. Therefore, further estimation of the face plane is necessary. The algorithm for optimally estimating geometric entities using weighted least squares presented in (Heuel, 2004) is employed and combined to the estimation process to calculate the best fit plane and estimate the coplanarity of two segments or planes. Compared with least squares, each term in the 52

weighted least squares criterion includes an additional weight that determines how much each observation influences the final parameters; the weights come from the covariance matrices. The whole estimation algorithm is:
(1) Group the ALS points that belong to the same segment.
(2) Estimate the best-fit plane for every group of ALS points, giving a set of planes.
(3) Estimate the coplanarity of every pair of planes; if two planes are coplanar, their points are combined and the best-fit plane equation is re-estimated. The new plane is added to the estimation iteration and the original two planes are removed.
(4) Step (3) is iterated until no new plane can be added and the coplanarity of every pair of planes has been estimated.
(5) If the final number of calculated face planes is still larger than the number of roof faces, the roof type information obtained from the image helps to determine the correct number. Either or both of the following can be performed: a) planes supported by fewer points than a specified value are eliminated; b) if the roof face number is N, the N planes with the most points are selected as the face planes.

An example of the estimation algorithm is shown in Figure 5.9. Figure 5.8(d) contains 31 segments; after the estimation only two plane segments are left, as shown in Figure 5.9(a). The two best-fit face planes can then be calculated as shown in Figure 5.9(b).

(a) Two segments left after estimating Figure 5.8(d) (b) The best-fit roof face planes
Figure 5.9: Estimation and calculation of the roof face planes

Determine the relationship of 2D edges and 3D roof faces

The objective of this step is to determine to which roof face a roof edge belongs. If a roof has more than one roof face, the corresponding face of each roof edge must be determined first, so that the view plane through the edge can be intersected with the correct roof face plane. For a single-face roof, for example a planar or shed roof, this step is not needed because there is only one roof face and all edges belong to it.
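Before turning to the edge-face relations, the plane estimation just described can be illustrated with a deliberately simplified sketch: an unweighted total-least-squares plane fit per segment and a coplanarity test based on the dihedral angle and the point-to-plane distance. The thesis itself uses the weighted estimation of (Heuel, 2004); the thresholds and function names below are illustrative (each function saved in its own file or as a local function).

% Fit a plane n'*x + d = 0 to a segment by unweighted total least squares.
function [n, d] = fitPlane(pts)                % pts: N x 3 segment points
    c = mean(pts, 1);                          % centroid lies on the best-fit plane
    [~, ~, V] = svd(pts - c, 'econ');          % smallest right singular vector = normal
    n = V(:, 3);
    d = -n' * c';
end

% Simple coplanarity test for two segments (5 degrees / 0.15 m are example thresholds).
function tf = areCoplanar(ptsA, ptsB)
    [nA, ~]  = fitPlane(ptsA);
    [nB, dB] = fitPlane(ptsB);
    angleDeg = acosd(min(abs(nA' * nB), 1));   % dihedral angle between the two normals
    distA    = mean(abs(ptsA * nB + dB));      % mean distance of segment A to plane B
    tf = angleDeg < 5 && distA < 0.15;
end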

The edge-face assignment is achieved in two steps: derive the 2D convex hull of every roof face from the ALS points belonging to it, and then estimate the relationship between the roof edges and the edges of these convex hulls. Figure 5.10 shows the two 2D roof face convex hulls of the gable roof; the solid red lines are the roof edges and the dashed blue lines are the convex hull edges.

Figure 5.10: Roof face convex hulls
Figure 5.11: Estimation of the relationship between roof edges and convex hull edges (distances d1 and d2 from the convex hull edge end points to the roof edge)

The criterion for deciding that a roof edge belongs to a roof face is: the distance between the roof edge and some edge of the roof face convex hull is less than a specified value, for example 2 pixels, and at the same time the angle between the two edges is less than a threshold, for example 2 degrees. The distance between a roof edge and a convex hull edge is determined as follows (see Figure 5.11 and the sketch below):
(1) Calculate the line equations of both edges.
(2) If the two lines intersect and both edges contain the intersection point, the distance is zero.
(3) Otherwise, calculate the distances d1 and d2 from the end points of the convex hull edge to the roof edge; the minimum of the two is taken as the distance between the edges.
Using these criteria, we know with which roof face the view plane through an edge will intersect; the 3D roof can then be constructed as described in Section 5.7.
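A compact sketch of this distance and angle test, using a point-to-segment helper; the 2-pixel and 2-degree thresholds follow the example values above, and the segment-intersection case of rule (2) is omitted for brevity (it would simply return zero). Function names are illustrative.

% Distance from point p to the finite segment a-b (all 1 x 2 image coordinates).
function d = pointToSegment(p, a, b)
    ab = b - a;
    t  = max(0, min(1, dot(p - a, ab) / dot(ab, ab)));  % clamp the projection onto the segment
    d  = norm(p - (a + t * ab));
end

% Decide whether roof edge r1-r2 matches convex hull edge h1-h2.
function tf = edgeMatchesHullEdge(r1, r2, h1, h2)
    d      = min(pointToSegment(h1, r1, r2), pointToSegment(h2, r1, r2));   % d1, d2 of rule (3)
    angDeg = acosd(min(1, abs(dot(r2 - r1, h2 - h1)) / (norm(r2 - r1) * norm(h2 - h1))));
    tf     = d < 2 && angDeg < 2;   % 2 pixels, 2 degrees
end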

67 5.5. Semi-automatic roof recognition If the roof edge polygon of a building is a rectangle and all the edges can be detected, then the interactive work can be reduced and the building roof can be recognized semi-automatically. The idea is: for a rectangle roof, if one or two edges are identified then it is possible to form the whole rectangle by automatically selecting the other edges. Here, a semi-automatic roof recognition process by only identifying one correct edge is implemented. A 1 1 B (a) A hip roof with detected edges (b) Identify one edge (c) Join the collinear edges A B (d) Determine the two edges vertical to 1 (e) Determine the last edges (f) Form roof polygon Figure 5.12: Semi-automatic roof recognition of a hip roof Figure 5.12(a) shows a hip roof building with the detected and simplified edges. It can be seen that the most right edge of the building has been detected as many double edges. Then a general semiautomatic roof recognition procedure can be performed on this building as below: (1) If the number of the detected lines is large, then specify a value and edges shorter than it will be removed as seen in Figure 5.12(b). (2) Select and identify one correctly detected edge from those double detected edges as Edge 1 as shown in red in Figure 5.12(b). (3) Automatically search if there are some edges that are collinear with Edge1, if there are then join them into one line with the end point A and B as shown in Figure 5.12(c). 55

68 (4) Those edges within a specified distance, for example 10 pixels, and parallel to Edge 1 will be removed as shown in Figure 5.12(d). (5) From the remained edges, the one that is perpendicular to Edge 1 and has the minimum distance from A to it will be selected as Edge 2. In the meantime, the one that is perpendicular to Edge 1 and has the minimum distance from B to it will be select as Edge 3. The three edges are shown in Figure 5.12(d). (6) From the Edge 2 and Edge 3, select the longer one. In this case, it s Edge 2. (7) From the remained edges, select the longest one that is perpendicular to Edge 1 and within a threshold distance from Edge 2 to it. Name it as Edge 4 as shown in Figure 5.12(e). (8) Link the four edges anti-clockwise to form the roof polygon as shown in Figure 5.12(f). After forming the roof polygon, the ALS points falling in the roof will be selected and the roof face planes will be estimated using the same algorithm as in Section Then, the relationships between 2D roof edges and 3D roof faces will be determined using the method described in Section Based on those relations, the 3D roof can be constructed as described in Section 5.7. And finally, the 3D building will be reconstructed and visualized as shown in Figure 5.26(b) Automatic roof recognition As mentioned in Section 4.2, by using 2D edges extracted from image and 3D roof faces derived from ALS points, it provides a possible way to implement the roof recognition automatically, especially for single buildings or primitive buildings with rectangle roof edge polygon. The idea is: from ALS points belong to a roof the convex hull can be derived; and by projecting it onto the image a 2D convex hull polygon can be obtained. Then by applying the constraint of the rectangle on the 2D convex hull polygon, a convex hull rectangle can be generated. It will provide a clue and possible locations of the potential correct roof edges. Then by estimating the relations of the detected edges and the edges of the convex hull rectangle the correct edges can be extracted. Compared with the interactive and semi-automatic roof recognition, the main difference is that in automatic roof recognition the 2D roof edge polygon will be formed automatically and without any human intervention. Another difference is the process order. The ordered main processes of the automatic roof recognition are: automatic roof face estimation, automatic 2D roof extraction and automatic determination of relations between 2D roof edges and 3D roof faces. Taking the building in Figure 5.13 (a) as an example, after edge detection and simplification the detected edges are shown in Figure 5.13 (b). Using the roof-level ALS points, the same process as described in Section is performed to estimate the roof face planes and then 3D roof faces, face plane equations and ALS points that belong to the roof are obtained. Then based on the detected edges and the ALS points that belong to the building roof, the process for the automatic 2D roof recognition is performed in seven steps: (1) Calculate 2D convex hull of all ALS points belong to the roof. 56

Figure 5.14(a) shows the convex hull (dashed red line) of the planar roof building, calculated from the ALS points that belong to it.

(a) A planar roof building (b) Detected edges
Figure 5.13: A planar roof building used for automatic 2D roof extraction
(a) 2D convex hull (b) Simplified 2D convex hull
Figure 5.14: 2D convex hull and simplification

(2) Simplify the 2D convex hull
Usually the convex hull has more edges than the roof; in this case it has 23 edges, which makes the automatic process very hard. Therefore the convex hull is simplified. The algorithm is:
a) Select the longest edge of the convex hull and name it Edge 1, as shown in Figure 5.14(a).
b) Among the edges perpendicular to Edge 1 (within a threshold, for example one degree), select the longest and name it Edge 2.
c) Among the remaining edges that have the same direction as Edge 1 (within the threshold) and lie more than 20 pixels away from it, select the longest and name it Edge 3.
d) Among the remaining edges that have the same direction as Edge 2 (within the threshold) and lie more than 20 pixels away from it, select the longest and name it Edge 4.
e) Link the four edges into a polygon and order its vertices anti-clockwise. This polygon is the simplified convex hull.
(3) Create a buffer around the convex hull

Usually a large number of edges are detected on the image. In this example it may not look like many, but there are in fact 189 edges. Since the correct edges are most likely near the convex hull, a buffer around the convex hull is created first. The buffer distance can be set approximately according to the image resolution and the spacing of the ALS points; here it is set to 6 pixels, which is about 0.5 meter on the ground (the image resolution is about 0.075 m). Figure 5.15 shows the created buffer (between the red lines).
(4) Select edges within the buffer
Only the edges lying completely within the buffer are kept as candidate edges (see Figure 5.16); the other edges are removed.
Figure 5.15: Convex hull buffer
Figure 5.16: Edges within the buffer
(5) Determine the roof edges
Compare every edge of the simplified convex hull, from Edge 1 to Edge 4, with the candidate edges: if a candidate edge is parallel to the convex hull edge and lies within its buffer, it is a possible roof edge. If a convex hull edge has several possible edges, the criteria are: if one edge is clearly longer than the others, it is selected as the correct edge; if two or more edges have almost the same length and are much longer than the others, the one closest to the convex hull edge is selected.
(6) Form the 2D roof polygon
After all four edges have been determined, they are linked anti-clockwise to form the 2D roof. Figure 5.17(a) shows the extracted 2D roof polygon.
(7) Automatic adjustment
In Figure 5.17(a) the lower edge, Edge 3, is not accurate and does not coincide with the corresponding image edge; one possible reason is that trees have affected the edge detection. Therefore, an automatic adjustment of the extracted roof polygon is sometimes needed. The process is:
From the four edges determined in step (5), select the longest edge as the dominant edge; here it is Edge 1.

Then adjust the other edges so that they are perpendicular or parallel to the dominant edge; here Edge 2 and Edge 4 should be perpendicular to Edge 1, and Edge 3 parallel to it. When an edge is adjusted, it is rotated by an angle around its central point. Figure 5.17(b) shows the adjusted 2D roof polygon and Figure 5.27(a) shows the final 3D model of this building.

(a) Automatically extracted 2D roof (b) After adjustment
Figure 5.17: Automatically extracted 2D roof and adjustment

After the automatic roof polygon extraction, the relations between the 2D roof edges and the 3D roof faces are determined using the same method as in the interactive case (Section 5.4), and based on these relations the 3D roof is constructed as described in Section 5.7.

Automatic roof construction

The 3D roof construction rests on two facts: the roof edges are ordered anti-clockwise, and the structure of a primitive building is relatively fixed. This allows every primitive building roof to be constructed automatically. This thesis provides three algorithms, for planar, gable and hip roof construction; for other primitive buildings the idea and principle are the same.

(1) Single-face roof
For a single-face roof, for example a planar roof, no matter how many edges it has, the 2D edge polygon extracted from the image can simply be backprojected onto the face plane, as shown in Figure 5.18. This can be done in two ways: a) backproject every vertex onto the face plane and link the resulting points to form the 3D roof, or b) intersect every view plane through the edges with the face plane to obtain a set of ordered 3D lines, and intersect these lines to form the 3D roof (see the sketch below).

Figure 5.18: 3D single-face roof construction (2D edge polygon, face plane and resulting 3D planar roof)
Figure 5.19: 3D gable roof construction (edges 1 to 6, faces 1 and 2, ridge and vertices p1 to p6)
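A minimal sketch of vertex backprojection, option a) above, assuming the finite 3x4 projection matrix P used elsewhere in this workflow and a face plane written as n'X + d = 0; the function name is illustrative.

% Backproject an image point onto the 3D plane n'*X + d = 0, given a 3x4 camera matrix P = [M, p4].
function X = backprojectToPlane(P, x_img, n, d)
    M   = P(:, 1:3);
    p4  = P(:, 4);
    C   = -M \ p4;                   % camera centre
    dir = M \ [x_img(:); 1];         % direction of the viewing ray through the pixel
    lambda = -(n' * C + d) / (n' * dir);
    X   = C + lambda * dir;          % 3D roof vertex lying on the face plane
end

Linking the backprojected vertices of the 2D roof polygon in order yields the 3D roof of option a); option b) intersects the view planes through the edges with the face plane instead, which is the formulation the gable and hip algorithms below rely on.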

72 (2) Gable roof The automatic algorithm to construct a gable roof is (see Figure 5.19 for reference): (1) Intersect the two roof faces and obtain the ridge line. (2) Select one of the edges parallel with the ridge line and number it as Edge 1 and the number of the other edges will be given consecutively and anti-clockwise. (3) Estimate and find the corresponding face of edge 1 using the estimation method described in Section and name it as Face 1. Consequently, the other face will be Face 2. (4) Intersect every view plane through Edge 1, 2, and 6 with Face 1 and obtain 3D Edge line 1, 2 and 6. Also, intersect every view plane through Edge 3, 4, and 5 with Face 2 and obtain 3D Edge line 3, 4 and 5. (5) Intersect 3D Edge line 1 with 2 and obtain 3D point p1. Intersect 3D Edge line 3 with 4 and obtain 3D point p3. Intersect 3D Edge line 4 with 5 and obtain 3D point p4. Intersect 3D Edge line 6 with 1 and obtain 3D point p6. (6) Intersect 3D Edge line 2 with 3, Edge line 2 with ridge and Edge line 3 with ridge then obtain three 3D points. The average value of the 3 points will be given to p2. (7) Intersect 3D Edge line 5 with 6, Edge line 5 with ridge and Edge line 6 with ridge then also obtain three 3D points. The average value of the 3 points will be given to p5. (8) Thus the 3D gable roof is constructed with: 3D roof polygon= {p1, p2, p3, p4, p5, p6}, 3D face1= {p1, p2, p5, p6}, 3D face2= {p2, p3, p4, p5} Figure 5.20 shows a result of the constructed 3D roof of the gable roof building using the algorithm above. The red solid lines are the 3D ridge and 3D edges generated by intersecting the view plane through the edges of the roof edge polygon and the two colour meshes are the roof face planes. Figure 5.20: The result of constructed 3D roof of the gable roof building In case that the 2D gable roof edge polygon is collected and formed by 4 or 5 edges, for example, in Figure 5.19 Edge 5 and 6 may be one line or Edge 2 and 3 may be one line or both of them are one lines, then the only difference of the construction is that in step (4), the view plane through the edge should intersect with Face 1and Face 2 respectively. The other steps will keep the same. 60
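Step (1) of the gable algorithm above intersects the two estimated face planes to obtain the ridge. A minimal sketch, with planes written as n'X + d = 0 and illustrative names, is given here.

% Intersect two planes n1'*X + d1 = 0 and n2'*X + d2 = 0 (n1, n2 are 3x1 normals).
% Returns a point on the ridge and the ridge direction.
function [p0, dirVec] = intersectPlanes(n1, d1, n2, d2)
    dirVec = cross(n1, n2);            % ridge direction
    dirVec = dirVec / norm(dirVec);
    A  = [n1'; n2'];                   % two plane equations
    p0 = A \ [-d1; -d2];               % one particular point lying on both planes
end

The ridge end points p2 and p5 are then obtained by intersecting this line with the 3D edge lines, as in steps (6) and (7) of the algorithm above.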

(3) Hip roof
The characteristics of a hip roof are: it has two longer edges and two shorter edges, two bigger faces and two smaller faces, and each edge belongs to exactly one face. These characteristics are used in the construction. The automatic algorithm to construct a hip roof is (see Figure 5.21 for reference):
(1) From the 4 edges of the 2D roof edge polygon, select one of the longer edges as Edge 1; the other edges are numbered consecutively and anti-clockwise.
(2) Estimate and find the corresponding face of Edge 1 using the estimation method described above and name it Face 1; the other bigger face is named Face 3.
(3) Estimate and find the corresponding face of Edge 2 and name it Face 2; the other smaller face is named Face 4.
(4) Intersect every view plane through the edges with its corresponding 3D face plane and obtain 4 3D edge lines.
(5) Intersect 3D Line 1 with 2, 2 with 3, 3 with 4 and 4 with 1 to obtain the 3D points p1 to p4.
(6) Intersect 3D Faces 1, 2 and 3 to obtain p5; intersect 3D Faces 1, 3 and 4 to obtain p6.
(7) The hip roof is then constructed with: 3D roof polygon = {p1, p2, p3, p4}, 3D face1 = {p1, p5, p6, p4}, 3D face2 = {p1, p2, p5}, 3D face3 = {p2, p3, p6, p5}, 3D face4 = {p3, p4, p6}.

Figure 5.21: Hip roof construction (edges 1 to 4, faces 1 to 4 and vertices p1 to p6)

3D building construction and visualization

3D building construction

Building construction means that, based on the 3D roof created in the previous step, the data of the other building parts, including the walls and the floor, are generated automatically, so that the complete building is reconstructed. Taking the single-face roof building in Figure 5.22 as an example, and following the definition of the object-oriented building model, the generic 3D building construction process consists of three steps:
(1) Project the ordered roof edge polygon onto the ground, or onto the roof on which the building is located, to obtain an ordered polygon. This polygon is the floor polygon.

74 (2) Link a roof edge and its corresponding floor edge in anti-clockwise order then obtain a wall polygon. (3) Repeat (2) until all the roof edges and their corresponding floor edges are linked. Then, all the walls will be constructed. 3 2 Roof 2 4 Roof 1 Wall Wall Floor 5 10 Floor Ground or a roof Ground or a roof (a) Single-face roof building (b) Gable roof building Figure 5.22: 3D building construction examples For the gable roof building, if the polygon P= {6, 5, 11, 12} and P = {5, 4, 10, 11} in Figure 5.22(b) should be coplanar then the line {5, 11} will be deleted and the two polygons will be adjusted and merged into one. So does the other side. Then the gable building will be reconstructed as shown in Figure Roof Roof Wall Floor 10 Ground or a roof Wall Floor 7 Ground or a roof Figure 5.23: Adjusted model of Figure 5.22(b) Figure 5.24: Hip roof building construction For hip roof buildings, based on Figure 5.21 and using the above generic 3D building construction process, the walls and floor can be obtained and the whole building will be constructed as shown in Figure Building model visualization According to the object-oriented building model, every building part is a polygon object. Therefore, the visualization of a building can be achieved by representing every building part as a polygon. In MATLAB, the function patch is very suited for creating 3D building models (See MATLAB help or online documents for more references). It is a low-level graphics function for creating patch graphics objects. A patch object is one or more polygons defined by the coordinates of its vertices and a patch graphics object is composed of one or more polygons that may or may not be connected. This is just 62

75 in accordance with the object-oriented building model. Other advantages are: the colourings and lighting of the patch can also be specified and every patch objects can be selected separately. Therefore, to visualize a 3D building model, the generic process is: first all the polygons of roof faces, walls and floors will be formed. And then, the polygons are presented by patch objects. For example, according to Figure 5.22(a) the building parts of a planar roof building with arbitrary roof edge number n can be represented by these polygons: Roof face= {1, 2, 3, 4,, n} Floor = {1+n, 2+n, 3+n, 4+n,, n+n} Wall(1)={1, 1+n, 2+n, 2} Wall(2)={2, 2+n, 3+n, 3} Wall{n-1}={n-1, n-1+n, n+n, n} Wall{n}={n, n+n, 1+n, 1} Figure 5.25 shows the visualization of the composite building combined with two planar roof buildings in Figure 5.2(a) represented using patch objects. Figure 5.27(a) is the visualization of the automatically extracted planar building in section 5.6. Figure 5.25: Visualization of planar roof buildings Figure 5.26: Visualization of gable roof building (a) Planar roof building (b) Hip roof building Figure 5.27: Visualization of the automatically and semi-automatically extracted building 63
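A minimal sketch of this patch-based visualization for a planar roof building with n roof vertices: V holds the 3D coordinates of the roof vertices (rows 1..n) followed by the floor vertices (rows n+1..2n), and the face index lists mirror the polygons given above. Variable names and colours are illustrative.

% V: 2n x 3 matrix of vertices; rows 1..n roof vertices, rows n+1..2n floor vertices.
n      = size(V, 1) / 2;
roof   = 1:n;                                   % roof face polygon {1, 2, ..., n}
floorP = n + (1:n);                             % floor polygon {1+n, 2+n, ..., n+n}
figure; hold on; axis equal; view(3);
patch('Vertices', V, 'Faces', roof,   'FaceColor', [0.8 0.3 0.3]);   % roof
patch('Vertices', V, 'Faces', floorP, 'FaceColor', [0.5 0.5 0.5]);   % floor
for k = 1:n                                     % walls: Wall(k) = {k, k+n, k+1+n, k+1}
    nxt  = mod(k, n) + 1;
    wall = [k, k + n, nxt + n, nxt];
    patch('Vertices', V, 'Faces', wall, 'FaceColor', [0.9 0.9 0.7]);
end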

For the gable roof building in Figure 5.23, the building parts can be represented by these polygons:
Roof face(1) = {1, 2, 5, 6}
Roof face(2) = {2, 3, 4, 5}
Floor = {7, 8, 9, 10}
Wall(1) = {1, 7, 8, 3, 2}
Wall(2) = {3, 8, 9, 4}
Wall(3) = {6, 5, 4, 9, 10}
Wall(4) = {1, 6, 10, 7}
Figure 5.26 shows the visualization of the gable roof building using patch objects.

For the hip roof building extracted with the semi-automatic process, based on Figure 5.24 the building can be visualized with these polygons, as shown in Figure 5.27(b):
Roof face(1) = {1, 5, 6, 4}
Roof face(2) = {2, 5, 1}
Roof face(3) = {3, 6, 5, 2}
Roof face(4) = {4, 6, 3}
Floor = {7, 8, 9, 10}
Wall(1) = {1, 7, 8, 2}
Wall(2) = {2, 8, 9, 3}
Wall(3) = {3, 9, 10, 4}
Wall(4) = {4, 10, 7, 1}
For a composite building, the coordinates of the roof face polygons, wall polygons and floor polygons are stored directly with the polygons. When visualizing it, all the polygons of the building parts are converted directly to patch objects.

Building model refinement

So far 3D buildings can be extracted, but in some cases the building data is not correct, because buildings intersect, or not reasonable, because the decomposition method may cut what is in reality one building into two or more buildings. For example, showing the two buildings of Figure 5.25 in wireframe view, as in Figure 5.28(a), reveals that the walls of the two buildings intersect, which is not reasonable. Referring to Section 3.5.1, only the horizontal relationships of buildings need to be taken into consideration when combining and refining object-oriented building models. According to the object-oriented building model, there are only two admissible cases for the horizontal relationship between any two buildings: touch or detached; two buildings cannot intersect each other (a simple test is sketched below). Therefore, the building operations of the object-oriented model are employed to combine and refine the building models in order to achieve a correct and reasonable building representation.
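A minimal sketch of checking this horizontal relationship between two footprints, using MATLAB's polyshape class (a convenience that post-dates the original implementation); the footprint coordinates and the 5 cm tolerance are illustrative.

% xa, ya and xb, yb: footprint (floor polygon) coordinates of buildings A and B.
A = polyshape(xa, ya);
B = polyshape(xb, yb);
common = intersect(A, B);                      % overlapping region of the two footprints
if area(common) > 1e-6                         % non-negligible overlap: the buildings intersect
    relation = 'intersect (needs clip or clip+merge refinement)';
elseif overlaps(polybuffer(A, 0.05), B)        % boundaries within ~5 cm: treat as touching
    relation = 'touch';
else
    relation = 'detached';
end
disp(relation);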

77 Building gluing When two individual buildings in reality are touching each other, but because of the errors in the data collection most of the time they do not touch each other exactly. Possibly, there is a small gap between them or they slightly intersect. Then the gluing operation can be used to glue them together. As defined in the building model, gluing is a wall-based operation. The implementation algorithm is: (1) Select two buildings. (2) Select all the walls of each building. (3) Calculate the distance between every wall of one building and every wall of the other building and compare it with a predefined threshold. (4) If the distance is less than the threshold then remember both of the two corresponding walls as a wall pair. Otherwise repeat (3) and (4) until all the distances are calculated and compared with the threshold. (5) For every wall pair, calculate a middle or average vertical plane between them, then project the vertices of the two wall polygons onto the plane and obtain new wall polygon vertices. (6) Form new walls for each building. (7) Update the corresponding roof face edge, roof edge and floor edge of each building. (a) Wireframe view of Figure 5.25 (b) After clip (c) After clip and merge Building merge Figure 5.28: Building refinement examples When performing building decomposition, sometimes one building in reality may be decomposed into two or more buildings. In other words, there should be no wall between the two buildings. In this case, the merge operation should be used to join these buildings and therefore, the basic idea of merge is to delete the common walls between buildings. A A A B B (a) (b) (c) B Figure 5.29: Three cases of using merge 65

The implementation algorithm of merge is:
(1) If the two walls on which the merge operation is to be performed are not coplanar, first glue the two walls using the gluing algorithm.
(2) See Figure 5.29 and suppose the two walls are A, of building 1, and B, of building 2. There are 3 possible cases.
(3) In case (a), A contains B: delete wall B in building 2 and cut B out of A. If, in addition, a vertical edge of A coincides with a vertical edge of B and the two walls connected by this edge are coplanar, they are merged into one. Finally the two floors are also merged.
(4) In case (b), A equals B (usually within a threshold): delete both A and B from their respective buildings and join the two floors into one. If the connected walls of the two buildings are coplanar they are merged into one, and if the two roofs are coplanar they are merged as well.
(5) In case (c), A and B intersect: the common part is cut off from either wall and the two floors are merged.
Obviously, both gluing and merge apply to two buildings that should touch each other. The difference is that after gluing the two buildings merely touch each other exactly, whereas merge builds on gluing and additionally merges the two buildings into one building or one composite building. Another important practical use of merge is to merge the parts of a composite building extracted from different images. This is useful when a building is partly occluded in every image, but the parts from multiple images together give a complete description of the building.

Refinement for intersecting buildings

When two buildings intersect each other, they should either be two buildings that meet each other or be one building; in the latter case the two buildings need to be merged into one. There are therefore two ways to refine intersecting buildings using the building operations.

(1) Building clip
This process regards the two intersecting buildings as two individual buildings. Taking the two buildings in Figure 5.28(a) as an example, the refinement using clip is implemented as follows (a sketch of the 2D roof clipping is given below):
Name the two buildings A and B (see Figure 5.30 for reference).
First, construct a vertical plane through every edge of roof A. Intersecting every vertical plane with roof B gives a set of lines; these lines form a polygon (in pink in (a)), the footprint of roof A on the plane of roof B.
Then intersect this polygon with roof B to obtain the common part of the two, here the polygon bounded by 1, 2, 3 and 4. Subtracting this polygon from roof B gives a new roof B, as shown in (b).
Based on the new roof, a new building B can be constructed as shown in (c). Buildings A and B now touch each other exactly.
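In plan view, the core of the clip operation is a polygon subtraction. A minimal sketch using MATLAB's polyshape (again a modern convenience not available in the original implementation) is shown below, with illustrative variable names; the clipped 2D footprint is then lifted back onto roof B's face plane to rebuild its roof and walls.

% xa, ya: footprint of roof A on the plane of roof B (the pink polygon); xb, yb: footprint of roof B.
A = polyshape(xa, ya);
B = polyshape(xb, yb);
common = intersect(A, B);            % the part of B covered by A (polygon 1-2-3-4)
newB   = subtract(B, common);        % clipped roof B, now only touching A
[xnew, ynew] = boundary(newB);       % ordered vertices of the new roof B footprint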

79 A B B B 1 2 A 4 3 Figure 5.30: Building clip process The visualization of the two buildings after building clip is shown in Figure 5.28(b). There are two single buildings that touch each other and now there are no intersecting walls. (2) Building clip and merge (a) (b) (c) This process regards the two intersecting buildings as one composite building. Just as described in Section 3.5.1, by combining the operation clip and merge two intersecting buildings can be refined by two steps: (1) Perform building clip on the two buildings as in Figure (2) Then by performing building operation merge on building A and B, the refined building will look like in Figure 5.28(c). It can be seen in Figure 5.28(c), compared to (b) the common walls between the two buildings are removed and the two floors are joined. Obviously, the difference between them is that after building clip the two single buildings are still single buildings, but after clip and merge the two single buildings are merged into one composite building. Although both the two ways can deal with intersecting buildings, the results are different as can be seen from the examples. In real production, the selection on which way is applicable should be in accordance with the situation and the project specifications Building data storage and export Data storage The MAT-file of MATLAB is used to store the 3D building data. It is a binary disk file with the extension.mat (refer to MATLAB on-line documents for more information). Also, during the process all the intermediate data is also stored using this file format, for example, the working image, cut-off building image etc. Figure 5.31 shows the data structure of the storage of the 3D building data. Here, Building can be either single buildings or composite buildings. According to Section 3.6.1, when decomposing a complex building, a base building with all the buildings on its roof will be regarded as a composite building. And, Roof means a building there because during the data collection the roof is the main feather that can be seen. In order to successfully and completely store all the building data, these data types are used: strings, matrices, multidimensional arrays, structures, and cell arrays. For example, if there is a composite building with a BuildingNumber 5 that consists of 67

80 3 single buildings, then the second face polygon of the third building can be accessed by using the expression: Building{5}.Roof{3}.FacePolygon(2). 3D Building Models BuildingNumber Building (1,2,,BuildingNumber ) RoofNumber GroundElevation Roof (1,2,,RoofNumber ) RoofType EdgePolygon FaceNumber FacePlanes Convexhull ALSpoints2D ALSpoints3D FacePolygon Wall polygon FloorPolygon Figure 5.31: Data storage structure of 3D building Export VRML Since VRML is widely used for describing 3D models and is an open standard, the extracted 3D building models can be exported to VRML format so that they can be used in other system and for more applications. In VRML, the IndexedFaceSet uses a list of coordinates and face descriptions pointing to these coordinates to store and represent the polyhedron type objects, which is most like in our building model. According to the two data structures of the object-oriented building mode in Section 3.62, there are also two corresponding ways to export the data in VRML format: (1) All the vertices of a 3D building will be stored in an array and every building part polygon only stores the indices of the vertices. (2) Every building part polygon directly stores its vertices and therefore every part will be stored and represented separately. The former will save storage space and especially suitable for primitive buildings. But for a complicated building it will need extra process to combine all the vertices in an array and find the relations of the building part polygons and the vertices belong to it. The latter needs more storage spaces and may slow down the display speed, but it does not need to find the relations between the building part polygons and its vertices. 68
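A minimal sketch of the first export variant (one shared vertex list, faces stored as index lists) is given below, writing a single IndexedFaceSet to a VRML 2.0 file; the function name and argument layout are illustrative.

% V: m x 3 matrix of all building vertices; faces: cell array of 1-based index vectors, one per polygon.
function exportVRML(filename, V, faces)
    fid = fopen(filename, 'w');
    fprintf(fid, '#VRML V2.0 utf8\n');
    fprintf(fid, 'Shape { geometry IndexedFaceSet {\n');
    fprintf(fid, '  coord Coordinate { point [\n');
    fprintf(fid, '    %.3f %.3f %.3f,\n', V');                        % every vertex stored once
    fprintf(fid, '  ] }\n');
    fprintf(fid, '  coordIndex [\n');
    for i = 1:numel(faces)
        fprintf(fid, '    %s-1,\n', sprintf('%d, ', faces{i} - 1));   % 0-based indices, -1 closes a face
    end
    fprintf(fid, '  ]\n} }\n');
    fclose(fid);
end

The second variant would instead call a writer like this once per building part, each time with that part's own vertex list and a single face.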

81 Figure 5.32 shows the exported VRML format of the 3D building model in Figure 5.19 using the former method. Figure 5.32: 3D buildings in VRML format A prototype production system Along with the implementation of the object-oriented model based 3D building extraction, by grouping the implementation codes and programs, a prototype production system BMC, Building Model Constructor, was developed as shown in Figure The prototype system consists of 3 main function modules: Data preparation, including camera projection matrix calculation and ALS data format conversion. 3D Building Construction, all the interactive roof recognition based processes from define working area to 3D building visualization and export VRML file are realized. Automatic process, this module performs the automatic and semi-automatic roof recognition based processes. Due to the time limitation of the MSc research, the automatic process module and the refinement functions have not completed. However, this prototype production system shows that the proposed method and workflow can be easily realized for practical use, which makes it very promising to make a real production system using the method presented in this thesis for 3D building data extraction towards a commercial system. In the meantime, it also provides a platform or a test bed for further research. Figure 5.34 shows the extracted 3D building models of a small test area using this prototype production system. 69

82 Figure 5.33: A prototype production system example Figure 5.34: 3D building models of a small test area extracted using the prototype system 70

83 6. Analysis, discussion and conclusion So far, an object-oriented building model based method for 3D building extraction by integrating ALS points and aerial imagery has been developed. The workflow for the 3D building extraction has been implemented. This chapter analyzes the extracted 3D building models, discusses the practical aspects, existing problems and the improvements that can be done in the future of the object-oriented model and the model based method; and in the end, makes a summary and concludes this thesis Analysis of extracted 3D building models This section aims to evaluate the 3D building extraction and reconstruction method by analyzing the extracted 3D building models. One limitation of this research is there is no much test data because it is difficult to collect both ALS data and aerial imagery for a relatively wide area. There are only several images and ALS points of two small areas were found (see Appendix 1). Besides, because there are no actual measurements of the buildings and also the field work is not possible, the absolute accuracy of the extracted 3D building models cannot be checked and evaluated. In order to analyze and evaluate the extracted building models, this thesis employs two methods: model fitting evaluation and geometric accuracy evaluation Model fitting evaluation Since the ALS points are in three-dimension and also accurate, a comparison and preliminary evaluation can be done by embedding the extracted 3D building model into the original ALS points to check how the model fits the ALS points. As shown in Figure 6.1, (a) is the original ALS points of the test gable roof building in Figure 5.2(b). (b) is the top view after embedding the 3D building model into the ALS points. (c) is a 3D view of the 3D building model and the ALS points. (a)original ALS points (b) Top view (c) 3D view Figure 6.1: Extracted 3D building model embedded in original ALS points It can be seen from Figure 6.1, although the image of this building was taken by a common digital camera mounted on a helicopter (see Appendix 1) and the camera parameters were calibrated using 71

the method in Appendix 3, it still shows a good result: the 3D building model fits the original ALS points very well.

Geometric accuracy evaluation

Furthermore, the geometric accuracy of the extracted 3D building models in Figure 5.34 can be checked in three aspects (refer to Figure 6.2 for reference):

Figure 6.2: Top view of the three extracted buildings (Gable 1, Hip, Gable 2)

(a) Eave and ridge heights
The ridge of a gable or hip roof building should normally be level, which means the height difference between the two end points of a ridge should be close to zero or less than a small value. Table 6-1 shows the elevation differences between the end points of the 3 ridges (the red lines in Figure 6.2) of the three extracted building models.

Table 6-1: Ridge elevation differences of the three building models
(columns: Building Model, Ridge end 5 (m), Ridge end 6 (m), Difference (m); rows: Gable 1, Gable 2, Hip)

The average ridge elevation difference is 6 cm.

For the eaves of a building, the normal assumption is that every longer edge of a gable or hip roof should be level, which means the height difference between the two end points of one edge should be nearly zero or less than a small value. Table 6-2 shows the elevation differences between the end points of the 6 longer eave edges of the three extracted building models.

Table 6-2: Eave elevation differences of the three building models
(columns: Building Model, Eave edge, Elev 1 (m), Elev 2 (m), Difference (m); rows: two eave edges each for Gable 1, Gable 2 and Hip)

The average eave elevation difference is 16 cm.

(b) Dimension accuracy
For a gable or hip roof building, the lengths of two opposite edges should normally be nearly equal. Table 6-3 shows the length differences of the opposite eave edges of the three building models.

Table 6-3: Eave edge length differences of the three building models
(columns: Building Model, Length 1-4 (m), Length 2-3 (m), Difference (m); rows: Gable 1, Gable 2, Hip)

The average eave edge length difference is 14.7 cm.

(c) Symmetry
Taking the hip roof building as an example, the two bigger roof faces should normally have almost the same size, and so should the two smaller faces. Table 6-4 shows the areas of the 4 roof faces of the hip roof building after projecting them onto a level plane.

Table 6-4: Area comparison of the faces of the hip roof
(columns: Face{ } (m²), Face{ } (m²), Face{1 2 5} (m²), Face{3 4 6} (m²))

Generally, the above check results are satisfactory. From the geometric accuracy checks it can be seen that:
The three ridges are accurate and reliable. Since they were calculated by intersecting face planes, this shows that computing the ridges by intersecting roof faces gives good results.
Except for Gable 1, the eave elevation differences of the other two buildings are also accurate, which shows that intersecting the view planes through the roof edges with the roof faces also gives good results.
The dimension accuracy and symmetry checks are reasonable as well, which supports the statements above.
It is also worth noting that:
In reality, buildings, even gable and hip roof buildings, are not always exactly regular and symmetric; special cases may exist.
The test data come from Dataset 2 (see Appendix 1). The images were taken with a common, non-metric camera without known camera parameters, and the image quality is low.
When calibrating the camera and calculating the projection matrix with the method described in Appendix 3, some uncertain errors may exist; the distortion of the image also needs to be taken into consideration.

86 These could be the reasons for some of the big differences, for example the eave elevation differences of Gable 1 and one bigger edge length difference of Gable 2. However, by analyzing and evaluating the extracted building models using the above methods, it gives an impression that generally the extracted 3D building models are in correct shapes and fit the original ALS points very well. It can be inferred that the extraction and reconstruction method for the 3D building model are reliable. But further absolute accuracy check using actual measurements of the buildings and even field work is highly needed Discussion on the object-oriented model based method In order to verify if the research problems defined in Chapter 1 has been solved and the research objective has been achieved, the discussion is divided in two parts according to the research problems. Table 6-5: Comparisons between object-oriented model and the existing building models Items Object-oriented model Existing building models Semantic information Yes No Building relations Defined Not defined Relations of building parts Defined explicitly Represented implicitly by topology for example using B-rep Decomposability Primitive buildings and building primitives Building combination Buildings are mainly decomposed into roofs (polygons), which makes it very flexible and easy to be used for any kind of buildings. There is no limitation. Even a building type is not defined; it can also be represented using existing primitive buildings. Because in the object-oriented building model, every building can be finally decomposed into a set of singleface roof buildings. Provides three building operations for building computation. The building operations are based on roof or wall, namely polygons; and therefore, can be realized and operated more easily. Using CSG, buildings are decomposed into polyhedrons. It is not flexible and not easy to operate, especially for complicated buildings. Some model based methods use building primitives. But if a building type is not defined, it can not be represented by existing primitives. Therefore it will have problems since one can not define all kinds of building primitives. Only CSG has the ability to combine building primitives. But the computation is based on polyhedrons. The implementation is more difficult. Furthermore, extra work needs to be done to transfer the combined model to B-rep. It can not be easily visualized. Data storage Can be directly stored in a DBMS. Need extra process in order to be stored in a DBMS. 74

87 Advantages of the object-oriented building model In this thesis, an object-oriented model for 3D building extraction has been developed. Table 6-5 shows comparisons between the presented object-oriented building model and the existing building models used for building extraction described in Chapter 2. Furthermore, the object-oriented building model can integrate data collection and model construction methods, geometry and topology, semantics and properties as well as data storage and management of 3D buildings into one model. It provides an opportunity to set up a common language or model for 3D buildings both for data extraction and application. Therefore, it can be seen that the presented object-oriented building model has more advantages and therefore more suitable for 3D building extraction Practicality of the presented method Practicality is an important objective of the presented method and workflow for 3D building extraction. It can be discussed and evaluated mainly in four practical aspects as below. (1) Accuracy From the analysis in Section 6.1, a good impression of the accuracy of some extracted building examples has been obtained. Furthermore, from the theoretical point of view the 3D building data extracted using the presented method will keep the accuracy from both aerial imagery and ALS points, which means that the results should have both good horizontal and vertical accuracy. And this needs to be verified by more future accuracy checks. (2) Reliability In the presented method, the reliability of 3D building extraction is improved and achieved by: The workflow contains both automatic procedure and user-assisted procedure. When the automatic process fails, the user-assisted process will fulfil the task. In the user-assisted procedure, the operator can interactively edit the edges and the wrongly detected edges can be corrected. The shape and structure information of the building roof perceived and obtained from the image, for example the roof type, are used in the roof face estimation, which can guarantee that the correct roof faces will be extracted even in a complicated scene. Edges are collected using line and it is not necessary that the whole edge is visible. The corners are obtained by intersecting corresponding edges. Therefore, sometimes even one or more corners are occluded the building can also be extracted accurately and completely. In case that one or more edges are totally occluded, the presented method also provides a solution to extract the building from multiple images correctly by using merge building operations. Based on the three building operations, the building refinement can make the extracted building models more correct and reasonable. (3) Productivity Since so far the photogrammetric method is still the main method for actual 3D building data production, a comparison can be made between the presented method and the conventional 75

88 photogrammetric method. It can be seen that data extraction work is saved and the process is simplified by integrating ALS points and aerial imagery: There is no stereoscopic observation and related special equipments or special software needed. Most of the time, the operation is base on single image. Simple buildings can be extracted automatically or semi-automatically. In user-assisted procedure, most of the time, the operator only needs to click the mouse to select the edges, which needs less effort than collect corners in stereoscopic models. Using building refinement, the intersection points of the edges and new edges and faces can be generated automatically. But in a conventional photogrammetric system, these intersection points would have to be measured interactively, or using some intersection tool of a CAD program. One limitation is that for a project both ALS data and the aerial imagery of the project area need to be acquired. But now airborne data acquisition platforms usually carry both digital aerial cameras and laser scanners, the data acquisition is becoming easier. Due to the lack of more test data, the productivity can not be evaluated by performing a test production of a relatively big area. This will be another future work. (4)Applicability (a) Advantages As a polyhedral model, the presented object-oriented building model can represent arbitrarily shaped buildings with planar faces. Buildings can be decomposed and combined with higher flexibility with the building operations, which makes the presented method be able to deal with very complicated buildings and applicable for almost any kinds of polyhedral buildings. The presented method is developed based on the experiences gained by analyzing the existing techniques and methods and has solved some problems that affect the practical use of combining ALS data and aerial imagery, for example the roof recognition and construction. It can be easily realized for developing a practical production system. The prototype production system in Section 5.11 has given an example. (b) Limitations However, there are also some limitations exist: The presented method can not extract buildings with non polyhedral roofs, for example sphere and dome shape roofs. The applicability of the presented method is also limited by the characteristics and the quality of the two data sources: ALS points and aerial imagery. For example, for approximate vertical roofs they can not be detected using airborne laser scanning and sometimes they also can not be seen in the images. For the smallest building roof that can be modelled using the presented method, it is also depended on the quality and resolution of both two data sources. For oblique photograph, from the only one simple test the presented method was shown also applicable. But there may be problems in some special cases, for example, a roof may be occluded in any image. Also, most of the time the image of a rectangle roof is not a rectangle any 76

more. Another problem is that when a roof occludes another roof, the ALS points belonging to both roofs will be projected into one roof area on the image. For these cases, further solutions need to be worked out in the future.
Therefore, for a complete evaluation of the applicability, more experiments and tests of the data extraction are highly needed, with more building types and on oblique images.

Future improvements

Because this research focuses mainly on the practical implementation of 3D building extraction, much can still be done to improve the presented method in terms of its degree of automation. At least the following improvements, at the model level and at the method level, can be planned.

Model improvement

So far, the object-oriented building model uses only attributes to store the properties and relations of the building and its parts. However, besides attributes an object may also have a set of methods: operations that represent its behaviour. One interesting and promising idea is to make the building operations a set of methods of the building object. For example, in Figure 6.3 the three building operations are made methods of the building model. When an automatic refinement process runs, it calls the methods of the related building models; the building models then automatically perform the corresponding operations and update themselves. This would make the building refinement more automatic and, furthermore, turn the presented model into a real and complete object-oriented building model.

3D building model: + Clip() + Merge() + Gluing()
Figure 6.3: An example of an object-oriented building model with methods

Method improvements

For the method, improvements can be made in the following aspects:
To keep the method generic, only the coordinate information of the ALS data is used in this research. If other information, for example the intensity, the different pulse returns and colour information, can also be used in the process, the degree of automation will improve.
The presented method focuses only on the extraction of 3D building data; it does not employ other algorithms and procedures such as Digital Terrain Model (DTM) extraction or ALS point classification. For a production in which both a DTM and 3D building data are required, the workflow would have to be modified slightly, and algorithms for filtering ground points and classifying ALS points would help to improve the automatic roof recognition.
At present, defining the working area and selecting the object building require the operator to draw a polygon on the image. But by using ALS data and related

Method improvements
For the method itself, improvements are possible in the following respects:
- In order to keep the method generic, only the coordinate information of the ALS data is used in this research. If other information, such as the intensity, the different pulse returns or colour information, could also be used in the process, the degree of automation would improve.
- The presented method focuses only on the extraction of the 3D building data; it does not employ other algorithms and procedures such as Digital Terrain Model (DTM) extraction or ALS point classification. For a production in which both a DTM and 3D building data are required, the workflow would have to be modified slightly, and the algorithms for filtering ground points and classifying ALS points would also help to improve automatic roof recognition.
- At present, the definition of the working area and the selection of the object building require the operator to draw a polygon on the image. Using ALS data together with suitable filtering, classification and segmentation algorithms, the roof area of a building could be derived from the ALS data automatically, and by projecting it onto the image the corresponding building image region could also be obtained automatically. These two interactive steps could then be replaced by a single, more automatic step.

Meanwhile, it is worth noting that: a) the implementation of these improvements depends on the above-mentioned automatic ALS data processing algorithms and techniques being mature and achieving satisfactory results; and b) even if these improvements are implemented in the future, a practical method will still need the interactive procedure, because the automatic process cannot be guaranteed to work in every case and at any time, at least in the near future. Nevertheless, the degree of automation of the whole method would certainly improve.

Conclusion

In this thesis, a new object-oriented model based method for 3D building extraction using ALS points and aerial imagery is developed and presented. From the review of the published methods, this research is the first attempt to develop and use an object-oriented building model for 3D building extraction by integrating ALS points and aerial imagery. The innovative parts are:
- an object-oriented building model dedicated to 3D building extraction;
- an object-oriented model based method for 3D building extraction, realised by integrating the object-oriented building model into the extraction process using ALS points and aerial imagery;
- three new building operation algorithms for object-oriented building models;
- three automation levels (automatic, semi-automatic and interactive) for 3D building extraction combined in one workflow.

From the discussion and the research achievements, the following conclusions can be drawn:
- The object-oriented building model developed here has advantages over the other existing models and modelling techniques and can improve and facilitate 3D building data extraction, especially when ALS data and aerial imagery are used.
- With an object-oriented building model it is feasible to set up a generic model for both 3D building data extraction and application.
- The presented object-oriented model based 3D building extraction method is practical and promising, with room for improvement towards a higher degree of automation and an extended application potential.
- The integration of an object-oriented building model, ALS data and aerial imagery can resolve some important practical problems in the development of a practical production system for 3D building data extraction and can result in highly practical and versatile systems.

All the research problems and questions have been addressed, and consequently the objective of developing an object-oriented model based method for 3D building extraction by integrating ALS points and aerial imagery has been achieved.

7. References

Baltsavias, E. P. Object Extraction and Revision by Image Analysis Using Existing Geodata and Knowledge: Current Status and Steps towards Operational Systems. ISPRS Journal of Photogrammetry and Remote Sensing (Integration of Geodata and Imagery for Automated Refinement and Update of Spatial Databases), 58(3-4).

Benner, J., Geiger, A., Leinemann, K. Flexible Generation of Semantic 3D Building Models. In: Gröger/Kolbe (Eds.), Proceedings of the 1st International Workshop on Next Generation 3D City Models, Bonn.

Brenner, C. Towards Fully Automatic Generation of City Models. In: XIX ISPRS Congress, Vol. XXXIII, Amsterdam, The Netherlands.

Brenner, C. Building Reconstruction from Images and Laser Scanning. International Journal of Applied Earth Observation and Geoinformation (Data Quality in Earth Observation Techniques), 6(3-4).

Braun, C., Kolbe, Th., Lang, F., Schickler, W., Steinhage, V., Cremers, A. B., Förstner, W., Plümer, L. On the Models for Photogrammetric Building Reconstruction. Computers & Graphics, Vol. 19, No. 1.

CityGML website.

Delara, R. J., Mitishita, E. A., Habib, A. Bundle Adjustment of Images from Non-metric CCD Camera Using Lidar Data as Control Points. In: Proceedings of the XXth ISPRS Congress, July 2004, Istanbul, Turkey, IAPRS, Vol. XXXV, Part B3.

Elaksher, A., Bethel, J. Reconstructing 3D Buildings from LIDAR Data. In: ISPRS Symposium 2002 "Photogrammetric Computer Vision", Vol. 34, Graz, Austria.

Englert, R. Learning Model Knowledge for 3D Building Reconstruction. PhD thesis, Rheinische Friedrich-Wilhelms-Universität Bonn, Institute of Computer Science III, Bonn, Germany.

Fischer, A., Kolbe, Th., Lang, L. On the Use of Geometric and Semantic Models for Component-Based Building Reconstruction. In: Semantic Modeling for the Acquisition of Topographic Information from Images and Maps (SMATI '99), Eds.: W. Förstner, C.-E. Liedtke, J. Bückner, Institut für Photogrammetrie, Universität Bonn.

Flamanc, D., Maillet, G., Jibrini, H. 3D City Models: An Operational Approach Using Aerial Images and Cadastral Maps. ISPRS Archives, Vol. XXXIV, Part 3/W8.

Gülch, E., Müller, H., Hahn, M. Semi-automatic Object Extraction - Lessons Learned. Commission 2, XXth ISPRS Congress, July 2004, Istanbul, Turkey.

Habib, A. F., Ghanma, M. S., Morgan, M. F., Mitishita, E. Integration of Laser and Photogrammetric Data for Calibration Purposes. In: Proceedings of the XXth ISPRS Congress, July 2004, Istanbul, Turkey, IAPRS, Vol. XXXV, Part B2, p. 170 ff.

Haithcoat, T., Song, W., Hipple, J. Automated Building Extraction and Reconstruction from LIDAR Data. From: Buildings/Building%20Extraction.pdf.

Heuel, S. Uncertain Projective Geometry: Statistical Reasoning for Polyhedral Object Reconstruction (Lecture Notes in Computer Science). Springer, 1st edition (June 14, 2004), 205 p.

Hongjian, Y., Shiqiang, Z. 3D Building Reconstruction from Aerial CCD Image and Sparse Laser Sample Data. Optics and Lasers in Engineering, 44(6).

Huber, M., Schickler, W., Hinz, S., Baumgartner, A. Fusion of LIDAR Data and Aerial Imagery for Automatic Reconstruction of Building Surfaces. 2nd Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Berlin, Germany.

Jinhui, H., You, S., Neumann, U., Park, K. K. Building Modeling from LIDAR and Aerial Imagery. ASPRS 2004.

Kaartinen, H., Hyyppä, J., Gülch, E., Vosselman, G., et al. EuroSDR Building Extraction Comparison. In: Gröger/Kolbe (Eds.), Proceedings of the 1st International Workshop on Next Generation 3D City Models, Bonn.

Kovesi, P. D. MATLAB and Octave Functions for Computer Vision and Image Processing. School of Computer Science & Software Engineering, The University of Western Australia.

Maas, H.-G., Vosselman, G. Two Algorithms for Extracting Building Models from Raw Laser Altimetry Data. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 54.

Madhavan, B. B., Wang, C., Tanahashi, H., Hirayu, H., Niwa, Y., Yamamoto, K., Tachibana, K., Sasagawa, T. A Computer Vision Based Approach for 3D Building Modelling of Airborne Laser Scanner DSM Data. Computers, Environment and Urban Systems, 30(1).

Rottensteiner, F. Semi-Automatic Extraction of Buildings Based on Hybrid Adjustment Using 3D Surface Models and Management of Building Data in a TIS. PhD thesis, Vienna University of Technology, Vienna, Austria.

Rottensteiner, F., Briese, C. A New Method for Building Extraction in Urban Areas from High-Resolution LIDAR Data. In: ISPRS Symposium 2002 "Photogrammetric Computer Vision", Vol. 34, Graz, Austria.

Schenk, T., Csatho, B. Fusion of LIDAR Data and Aerial Imagery for a More Complete Surface Description. Photogrammetric Computer Vision, ISPRS Commission III Symposium, September 9-13, 2002, Graz, Austria.

Schwalbe, E., Maas, H.-G., Seidel, F. 3D Building Model Generation from Airborne Laser Scanner Data Using 2D GIS Data and Orthogonal Point Cloud Projections. In: ISPRS Workshop "Laser Scanning 2005", Vol. XXXVI, Enschede, The Netherlands.

Stoter, J. E. 3D Cadastre. Delft, Nederlandse Commissie voor Geodesie (NCG), Netherlands Geodetic Commission, Publications on Geodesy: New Series 57, 327 p.

SUGR website.

Suveg, I., Vosselman, G. Reconstruction of 3D Building Models from Aerial Images and Maps. ISPRS Journal of Photogrammetry and Remote Sensing (Integration of Geodata and Imagery for Automated Refinement and Update of Spatial Databases), 58(3-4).

Vosselman, G. Building Reconstruction Using Planar Faces in Very High Density Height Data. In: International Archives of Photogrammetry and Remote Sensing, Vol. 32, Munich, Germany.

Vosselman, G. Fusion of Laser Scanning Data, Maps and Aerial Photographs for Building Reconstruction. IEEE International Geoscience and Remote Sensing Symposium and the 24th Canadian Symposium on Remote Sensing, IGARSS'02, Toronto, Canada.

Vosselman, G., Dijkman, S. 3D Building Model Reconstruction from Point Clouds and Ground Plans. In: ISPRS Workshop "Land Surface Mapping and Characterization Using Laser Altimetry", Vol. XXXIV, Annapolis, USA.

Zhou, G., Song, C., Simmers, J., Cheng, P. Urban 3D GIS from LiDAR and Digital Aerial Images. Computers & Geosciences (Multidimensional Geospatial Technology for the Geosciences), 30(4).

Zlatanova, S., Paintsil, J., Tempfli, K. 3D Object Reconstruction from Aerial Stereo Images. In: WSCG '98, the 6th International Conference in Central Europe on Computer Graphics and Visualization, February 9-13, 1998, Plzen-Bory, Czech Republic, 7 p.


Appendices

Appendix 1: Test data description

Two datasets, each consisting of aerial images and the corresponding ALS data, were collected as the data source for this research; they are listed in Table 1. The images of Dataset1 are scanned conventional aerial photographs; the images of Dataset2 were taken by a common 1-megapixel digital camera mounted on a helicopter. The limitation of Dataset1 is that the data area contains only buildings with planar roofs, although the camera parameters are known. The problems with Dataset2 are that all camera parameters are unknown and that the ALS point data are delivered in LAS format, for which no existing software at the Institute could perform the conversion to ASCII files. Therefore, in order to provide more test data for the research, a method to acquire the camera projection matrix of an unknown camera based on ImageModeler was developed and is presented in Appendix 3. Meanwhile, a program for converting LAS files to ASCII files was also developed (see Appendix 4; a minimal sketch of such a conversion is given at the end of this appendix). The method and the program could be useful for other researchers.

Table 1: Test data description

Aerial images                                  Dataset1         Dataset2
  Number of photos                             2                3
  Flying height, scale                         860 m, 1:5300
  Pixel size                                   14 microns
  Camera parameters (interior and exterior)    known            unknown

ALS points
  Flight altitude                              400 m
  Pulse frequency (Hz)
  Field of view                                +/- 7.15 degrees
  Measurement density (per m2)
  Swath width                                  100 m
  Mode                                         first pulse      first pulse

For the four buildings selected for the implementation tests in Chapter 5, the two buildings in Figures 5.2(a) and 5.14 come from Dataset1, and the buildings in Figures 5.2(b) and 5.12 come from Dataset2.
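The LAS-to-ASCII program itself is given in Appendix 4. As a rough indication of what such a conversion involves, the following minimal sketch (an assumption-laden illustration, not the Appendix 4 program) reads the LAS 1.0/1.1 header fields needed to locate and scale the point records and writes the coordinates as plain ASCII text; the function name, file handling and output format are illustrative only.

% Minimal sketch of a LAS-to-ASCII conversion, assuming LAS 1.0/1.1,
% little-endian byte order and X, Y, Z stored at the start of each point record.
function las2ascii(lasFile, asciiFile)
    fid = fopen(lasFile, 'r', 'ieee-le');
    if fid < 0, error('Cannot open %s', lasFile); end

    % Public header fields needed to locate, count and scale the points
    fseek(fid, 96, 'bof');
    offsetToPoints = fread(fid, 1, 'uint32');   % start of the point records
    fseek(fid, 105, 'bof');
    recordLength   = fread(fid, 1, 'uint16');   % bytes per point record
    numPoints      = fread(fid, 1, 'uint32');   % number of point records
    fseek(fid, 131, 'bof');
    scale  = fread(fid, 3, 'double');           % X, Y, Z scale factors
    offset = fread(fid, 3, 'double');           % X, Y, Z offsets

    % Each record starts with X, Y, Z as scaled 32-bit integers; skip the rest
    fseek(fid, offsetToPoints, 'bof');
    raw = fread(fid, [3, numPoints], '3*int32', recordLength - 12);
    fclose(fid);

    % Apply scale and offset, then write one "X Y Z" line per point
    xyz = raw .* repmat(scale, 1, size(raw, 2)) + repmat(offset, 1, size(raw, 2));
    fout = fopen(asciiFile, 'w');
    fprintf(fout, '%.3f %.3f %.3f\n', xyz);
    fclose(fout);
end

Because the point data offset and the record length are read from the header, the sketch is independent of the exact point record format as far as the X, Y, Z coordinates are concerned.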

Appendix 2: Calculation of the projection matrix and MATLAB code

For a projective (pinhole) camera, a 3D point X in the object space coordinate system and its corresponding 2D point x in the image coordinate system are related by equation (1), where P is the projection matrix of the aerial image, a 3 x 4 matrix, and X and x are the homogeneous representations of the 3D point and of its projected 2D point on the image:

x = P X    (1)

With the projection matrix P, the view plane through a 2D line on the image can be calculated by equation (2), where l = [a b c]^T is the homogeneous representation of the 2D image line and A is the homogeneous representation of the 3D view plane, a 4 x 1 vector. Here A = [A, B, C, D]^T, and A, B, C and D can be regarded as the coefficients of the view plane equation:

A = P^T l    (2)

If the camera parameters, both interior and exterior, are known, the projection matrix can be calculated directly using equation (3), where K is the calibration matrix created from the interior parameters, R is the rotation matrix and X0 contains the 3D coordinates of the projection centre:

P = K R [ I | -X0 ]    (3)

The projection matrix is implemented in MATLAB as follows:

% Get the calibration matrix from the focal length and the coordinates of the principal point
C = get_CalibrationMatrix(-f, 0, 0, x0, y0);
% Get the rotation matrix from the three rotation angles around the X-, Y- and Z-axes
R = get_RotationMatrix(Omega, Phi, Kappa);
% Get [ I | -X0 ]
I = get_IdentityWithCameraCentre(X, Y, Z);

% If the interior parameters are given in millimetres while the image measurements are in
% pixels, an affine transformation from the photo coordinate system to the image coordinate
% system is needed, obtained by interior orientation; A is that affine transformation matrix.
% (The flag name below is illustrative; the original condition was lost in the conversion.)
if interiorParametersInMillimetre
    A    = [a0 a1 a2; b0 b1 b2; 0 0 1];
    Ainv = inv(A);
    P    = Ainv * C * R * I;
else
    % The interior parameters are already given in pixels
    P = C * R * I;
end

%
% Function to calculate the rotation matrix
% (rotation order is omega, phi, then kappa)
%
function ret = get_RotationMatrix(omega, phi, kappa)
% Convert degrees to radians
omega = omega*pi/180;
phi   = phi*pi/180;
kappa = kappa*pi/180;
% Construct the rotation matrices around the X-, Y- and Z-axes separately
Rx = [1 0 0; 0 cos(omega) sin(omega); 0 -sin(omega) cos(omega)];
Ry = [cos(phi) 0 -sin(phi); 0 1 0; sin(phi) 0 cos(phi)];
Rz = [cos(kappa) sin(kappa) 0; -sin(kappa) cos(kappa) 0; 0 0 1];
% Calculate the combined rotation matrix
ret = Rz*Ry*Rx;

%
% Function to calculate the calibration matrix
%
function ret = get_CalibrationMatrix(c, s, m, x0, y0)
ret = [c, s, x0; 0, c*(1+m), y0; 0, 0, 1];

%
% Function to construct [ I | -X0 ]
%
function ret = get_IdentityWithCameraCentre(X0, Y0, Z0)
ret = [1, 0, 0, -X0; 0, 1, 0, -Y0; 0, 0, 1, -Z0];
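The following lines show, as a hypothetical usage example with invented numbers (and an illustrative projection matrix rather than one computed from real camera parameters), how the projection matrix is applied in equations (1) and (2): projecting a 3D point into the image and computing the view plane through an image line defined by two measured pixel positions.

% Illustrative projection matrix of a camera at the origin looking along the Z-axis
P = [1000 0 512 0; 0 1000 384 0; 0 0 1 0];

% Equation (1): project a homogeneous 3D object point into the image
X = [2; 3; 50; 1];                 % example 3D point
x = P * X;
x = x ./ x(3)                      % pixel coordinates, here [552; 444; 1]

% Equation (2): view plane through the image line defined by two pixel positions
x1 = [350; 812; 1];
x2 = [598; 790; 1];
l  = cross(x1, x2);                % homogeneous 2D line through both points
A  = P.' * l                       % 4x1 view plane coefficients [A; B; C; D]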

Appendix 3: A method to acquire the camera projection matrix based on ImageModeler

ImageModeler is a software product of RealVIZ designed to create 3D models from images; for more information, refer to the ImageModeler homepage and the on-line help documents. One useful function of ImageModeler is camera calibration, which defines the 3D space of the model from the 2D images and establishes the parameters of the camera(s) used to capture the photographs. The working principle is that, by identifying a sufficiently large number of corresponding features, normally points, in different images, the camera can be calibrated; the camera parameters (position, focal length, and so on) used to shoot the images as well as the 3D positions and orientations of the camera can thus be retrieved. Based on this function, a method using ImageModeler to acquire the camera projection matrix was developed in this research, and with it the camera projection matrices of the four images of Dataset2 in Appendix 1 were calculated. The workflow consists of four steps, described below.

Step 1: Use ImageModeler to obtain the camera parameters at model level (more details on calibrating cameras can be found in the "Calibrating Cameras" section of the on-line ImageModeler help documentation).

Figure 1: Identify the same points in the four images and calibrate the camera

In ImageModeler, identify enough corresponding features on the four images; in this case, 19 points were selected and identified. The camera was then calibrated and the results were improved using the methods
