FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES


FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

UMA SHANKAR PANDAY
March, 2011

SUPERVISORS:
Dr. M. (Markus) Gerke
Prof. Dr. M. G. (George) Vosselman

FITTING OF PARAMETRIC BUILDING MODELS TO OBLIQUE AERIAL IMAGES

UMA SHANKAR PANDAY
Enschede, The Netherlands, March, 2011

Thesis submitted to the Faculty of Geo-information Science and Earth Observation of the University of Twente in partial fulfilment of the requirements for the degree of Master of Science in Geo-information Science and Earth Observation. Specialization: Geo-informatics

SUPERVISORS:
Dr. M. (Markus) Gerke
Prof. Dr. M. G. (George) Vosselman

THESIS ASSESSMENT BOARD:
Prof. Dr. M. G. (George) Vosselman (chair)
Dr. K. (Kourosh) Khoshelham

Disclaimer

This document describes work undertaken as part of a programme of study at the Faculty of Geo-information Science and Earth Observation of the University of Twente. All views and opinions expressed therein remain the sole responsibility of the author, and do not necessarily represent those of the Faculty.

ABSTRACT

Automatic 3D reconstruction of buildings from remote sensing data has a wide range of applications, e.g. mapping in a cadastral context. If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines in cadastral systems. In addition, unrealistic 3D building models are obtained from these data sources. Oblique aerial images, as opposed to nadir-view images, reveal greater detail and show different views of an object taken from different directions. Building walls are visible in oblique images, and in this research they are used for automated roof overhang estimation. Self-occlusion is detected from the intersection of the viewing ray with the planes formed by the building faces. A fitting algorithm is employed to find the roof parameters of simple buildings. It uses a least squares algorithm to fit projected wire frame edges to their corresponding edge lines extracted from the images. Overhang and ground height are obtained by sweeping vertical and horizontal planes, respectively. An approximate ground height is obtained from lower resolution nadir images and then fine-tuned with oblique images. Internal quality checks are performed and reported as reliability information for the estimated parameters. Experimental results were verified against a high resolution orthoimage, ALS data and a field survey. Compared with the orthoimage, a planimetric accuracy of 1 cm mean and 5 cm standard deviation was obtained, while building orientations were accurate to a mean of 0.23 degrees and a standard deviation of 0.96 degrees. Overhang parameters agreed with the field survey to approximately 10 cm. Compared with ALS, the ground and roof heights were accurate to means of 3 cm and 8 cm with standard deviations of 19 cm and 8 cm, respectively. The developed approach reconstructs 3D building models well when sufficient texture is present.

Despite the satisfactory results, it has to be noted that occlusion from nearby objects has yet to be tested, and more images should be acquired for completeness of the overhang results.

Keywords: 3D reconstruction, parametric building model, fitting, plane sweeping, overhang estimation, oblique aerial images

ACKNOWLEDGEMENTS

I would like to take this opportunity to acknowledge the people and organizations whose support and contributions were invaluable in accomplishing this research and my MSc. First and foremost, I would like to express my sincere gratitude to my first supervisor, Dr. Markus Gerke, for his scholarly advice, help and encouragement. This research would not have gone in the right direction without his caring guidance and support. My deepest appreciation goes to my second supervisor, Prof. Dr. George Vosselman, for his comments, feedback and suggestions. I highly appreciate my supervisors for those long discussions regardless of their busy schedules. I would like to thank course director Mr. Gerrit Huurneman and course coordinator Dr. Wietske Bijker for their advice and guidance. My heartfelt acknowledgment goes to Dr. Sander Oude Elberink for always helping me with the laser part of my research. I would like to acknowledge Nuffic for providing the scholarship to pursue my MSc, and Blom Aerofilms for providing the Pictometry images. I extend my appreciation to PhD students Adam and Emily for their help and lively discussions during my research. I would like to thank Langat and Meko for helping me collect field survey data despite bad weather, and Shashish for his help in making nice drawings. I am grateful to John, Nazanin, Erimina, Tiwari and Chekwube for their cooperation and eagerness to help each other during the research. Keshav, Srijana, Arun and Subash deserve my acknowledgment for proof-reading my proposal and thesis. I also thank Pukar and Upama for their assistance. Special thanks to all my classmates for sharing a good 18 months together, and to all my Nepalese friends for providing a homely environment during my stay in The Netherlands.

Finally, I would like to express my deepest gratitude to my entire family for their love, support and guidance throughout my life.

TABLE OF CONTENTS

Abstract
Acknowledgements

1 Introduction
  1.1 Motivation and problem statement
  1.2 Research identification
    1.2.1 Research objective
    1.2.2 Research questions
    1.2.3 Innovation aimed at
  1.3 Thesis outline

2 Literature review
  2.1 Building reconstruction
    2.1.1 Parametric building models
    2.1.2 Fitting algorithms
    2.1.3 Plane sweeping
  2.2 3D reconstruction of buildings: an overview
  2.3 Oblique aerial images: an overview

3 Research methodology
  3.1 Parametric building model
  3.2 Initial parameters and self-occlusion test
    3.2.1 Selection of initial parameters
    3.2.2 Self-occlusion test
  3.3 Precise estimation by fitting
  3.4 Overhang estimation by plane sweeping
  3.5 Plane sweeping for ground height estimation

4 Experimental results and discussion
  Study area and data sets
    Oblique aerial images
    Nadir-view aerial images
    Airborne laser scanning data and orthoimage
  Roof parameters estimation
    Influence of nadir images in fitting results
  Overhang estimation
  Ground height estimation
  Main observations and discussion
    Roof parameters estimation
    Overhang estimation
    Ground height estimation

5 Conclusion and recommendations
  Conclusion
  Answers to research questions
  Recommendations

LIST OF TABLES

3.1 Summary of building parameters. Adopted from (Suveg and Vosselman, 2004)
Specification of oblique images from Pictometry
Specification of nadir images from Vexcel UltraCam D
Parameters of the straight line extraction algorithm
Planimetric accuracy (with oblique images only)
Roof height accuracy
Buildings orientation accuracy (in degrees)
Planimetric accuracy after inclusion of two nadir images
Comparison of overhang with field survey measurement
Ground height accuracy

LIST OF FIGURES

1.1 Comparison of vertical and oblique images. Images: © Blom
2.1 Parametric building models without overhang. Adopted from (Suveg and Vosselman, 2004)
2.2 The homography induced by a plane. Source: (Hartley and Zisserman, 2003)
2.3 Plane sweeping principle and epipolar line. Modified from (Hartley and Zisserman, 2003)
2.4 Plane sweep under translations along a principal scene direction. Source: (Werner and Zisserman, 2002)
3.1 Methodology adopted
3.2 (a, b, c) Parametric building models with overhang; (d) orientation of a building in the xy-plane
Selection of a unique reference point. The corner with the black dot represents the selected reference point; length (L), width (W) and azimuth (A) of the building are also shown
Occlusion test conditions
Viewing ray obstructed by an infinite plane, but not by the building face itself
An interior point making a sum of angles of 2π with the line end points of a polygon
Comparison of the fitting approach with (Vosselman, 1998). Wire frame edges (in green), edge lines (in red), sampling points (black dots) and the buffer around the wire frame (black rectangle) are also shown
Failure of the fitting algorithm to determine a large overhang. Wire frame edges (in green) and edge lines (in red) are shown
Viewing angle (α) with a wall and the angle between two views (β)
Discarding poor results
Quality of result from redundant information
Four area sections around a building for ground height estimation
Wire frame edges (in green) and edge lines (in red) extracted from the images are projected to them; blue dots on the edge lines represent sampling points
Wrong edge lines associated with vertical wire frame edges. Wire frame edges (in green) and edge lines (in red) are projected to oblique images; sampling points (blue dots) are also shown

4.3 Influence of nadir images in the fitting process
Comparison of overhang with field survey measurement
Two rectified images from the wall façade of a building at maximum correlation score (a, b); the correlation score of the two images (c); correlation plotted against overhang for the façade (d)
Visual verification of roof overhang estimation. Wire frame edges (in green) are projected to the images after overhang estimation by plane sweeping
Failure cases of overhang estimation by plane sweeping
Two rectified images from the ground surface around a building at maximum correlation score (a, b); the correlation score of the two images (c); correlation plotted against ground height for the surface (d)


Chapter 1
Introduction

1.1 MOTIVATION AND PROBLEM STATEMENT

3D reconstruction of buildings from remote sensing (RS) data has a wide range of applications, including environment and city planning, city growth monitoring, transmitter placement for telecommunication, transportation, virtual city tours, cadastral systems, real-time situation awareness in urban areas, and simulation of natural and man-made events (Englert and Gülch, 1996; Suveg and Vosselman, 2004; Poullis and You, 2009). To meet these demands, computer models of buildings have to be obtained, which saves time and removes the need for physical models. 3D reconstruction of buildings from aerial imagery is an active research area in computer vision and photogrammetry. Manual processing of aerial images is time consuming and requires highly qualified operators and expensive instruments. Therefore, several automatic and semi-automatic methods have been proposed in the literature (Englert and Gülch, 1996; Gülch et al., 1999; Suveg and Vosselman, 2004). Suveg and Vosselman (2004) propose an automatic method for 3D reconstruction of buildings which integrates nadir aerial imagery with building footprints and domain knowledge. Their system produced results in both urban and suburban areas with accuracy good enough for mapping. However, nadir-view aerial sources provide information only in the vertical direction and none for the sides of an object. Oblique aerial imagery, as opposed to nadir-view imagery, reveals greater detail (cf. figure 1.1) and shows different views of an object taken from different directions (Wang et al., 2008b). Building façades (including doors and windows) and protrusions are directly visible in oblique images, which may be used for automated texture extraction (Zebedin et al., 2006; Grenzdörffer et al., 2007). This façade information can be used to estimate roof overhangs and to model building walls.

However, due to the special viewing direction, not all information needed for reconstruction may be visible in a single image. Since multiple images from different views are available, the needed information can be found in images taken from other directions. The majority of building extraction strategies have two major stages: detection and reconstruction. Detection of buildings in imagery is essential before actual automatic building reconstruction begins. It locates building positions, called regions of interest (ROI), thus constraining the search area for the later reconstruction stage. For this research, the approximate locations of the buildings as well as their roof types (e.g. flat, gable and hip) are already known. 3D reconstruction of buildings from oblique aerial images is a relatively new research area in digital photogrammetry; no software or tools exist for automatic reconstruction of buildings from oblique imagery. Thus, considering the problems of nadir aerial images and keeping in mind the advantages of oblique images, this research aims to develop an accurate, reliable and fully automatic method for 3D reconstruction of single and simple buildings using oblique images.

Figure 1.1: Comparison of vertical and oblique images. (a) buildings from a vertical image; (b) buildings from an oblique image. Images: © Blom.

1.2 RESEARCH IDENTIFICATION

As stated in the problem statement, buildings reconstructed from nadir aerial image sources carry no information from the sides; they are constructed from top-view information only. Those products therefore do not, on the one hand, model the walls and protrusions as they are in reality, while cadastral systems, on the other hand, need a representation of the walls at their actual position. In this research it is assumed that only approximate building parameters are known, i.e. the building location is roughly given. The detected buildings have approximate reference point coordinates and orientation in a local coordinate system. Likewise, an approximate volume of the building is given; in other words, rough coordinates of the building corners are provided. The proposed study is therefore focused on building reconstruction. It determines the precise values of the building parameters, e.g. reference point coordinates, length, width and overhang, among others. Thus, the aim is to develop an automatic method for producing complete and realistic 3D building models with real roof overhangs and walls at the correct location. The quality of the roof, the walls and the ground height of the buildings will be evaluated separately.

Research objective

The primary objective of this study is to develop a method for automatic building reconstruction from oblique aerial images using a parametric model.

Research questions

To meet the research objective, the following research questions are formulated:

How to extend the parameters of a building model to accommodate roof overhang?
How to find roof outlines and ridges and evaluate their quality?
How to estimate roof overhang, find wall locations and assess their quality?

How to determine the height of a building and evaluate its quality?
Which methods should be used to test the developed algorithm?

Innovation aimed at

Automatic building reconstruction from oblique images using a parametric model is the innovation of this research. An automatic method will be developed for obtaining precise parameters of simple, single buildings. Complete 3D reconstruction of buildings from oblique images has not yet been done in the field of geo-information.

1.3 THESIS OUTLINE

Chapter 1: Introduction. This chapter includes the motivation and problem statement. It also covers the objective, the research questions and the innovation aimed at.

Chapter 2: Literature review. The chapter covers the concepts needed for this research and reviews literature on building reconstruction from aerial images, laser scanning and close range images. An overview of work done using oblique aerial images is also presented.

Chapter 3: Research methodology. The chapter describes the developed methodology. It covers the methods to determine self-occlusion and to find precise roof parameters, overhang parameters and building height automatically.

Chapter 4: Experimental results and discussion. It explains the study area and the data sets used. The chapter also elaborates some of the experimental results and discusses them.

Chapter 5: Conclusion and recommendations. Conclusions of the research, answers to the research questions and further recommendations are provided in this final chapter.


Chapter 2
Literature review

The aim of this chapter is to give the theoretical background needed for this research. The chapter starts with a description of building reconstruction methodologies. Parametric building models, fitting algorithms and the plane sweeping technique are explained in Section 2.1. Section 2.2 presents an overview of 3D building reconstruction methods from different data sources such as aerial images, laser scanning and close range images. Finally, an outline of work done using oblique aerial images is given in Section 2.3.

2.1 BUILDING RECONSTRUCTION

3D reconstruction of buildings from airborne images has been a dynamic research area in digital photogrammetry for decades. Traditionally, airborne images were processed manually. Manual processing is time consuming and requires highly skilled operators and expensive instruments. Therefore, significant efforts have been made in recent years towards automatic 3D reconstruction of buildings, and numerous automatic and semi-automatic building reconstruction methods have been proposed in the literature (Brenner, 2005; Haala and Kada, 2010). Automatic building extraction schemes consist of two steps: building detection and building reconstruction (Brunn, 1998), as cited in (Rottensteiner, 2001). Various data sources can be used for these purposes, for example aerial images, Digital Surface Models (DSM) obtained from stereo images, Synthetic Aperture Radar (SAR) or laser scanning data, and 2D GIS data. While building detection deals with the methods and techniques of identifying the region of interest (ROI) for the subsequent reconstruction step, building reconstruction generates the 3D geometric description of the building itself. The former is also known as building localization. A building reconstruction system requires an internal way of representing building models. Although buildings have a variety of structures, most of them have similar and regular structures.

Depending on the approach used for reconstruction, these regularities can be represented either by an implicit set of rules applied during the reconstruction step or by explicitly supplying model knowledge of buildings. The former is called the data driven or bottom-up approach, where low-level features (e.g. points or edges) are first extracted from the images. Homologous features are derived from the extracted ones, which carry no semantics, and finally domain knowledge of the object being reconstructed is applied for matching. The second approach is known as the model driven or top-down approach, where extraction of low-level features is followed by the application of (building) model knowledge. The object model is adjusted so that model features match the image features. Boundary representation (B-rep), Constructive Solid Geometry (CSG) and primitive instancing are among the common building representation methods. Boundary representation is based on a surface oriented view of solid objects, i.e. an object is considered to be represented completely by its bounding faces. In addition, it consists of the edges, vertices and topological relations of all the features. Constructive Solid Geometry (CSG) is generally used to describe complex objects that are formed from a set of simple primitives.

2.1.1 Parametric building models

In primitive instancing, or parametric building models, buildings are represented by a set of predefined building types. Each building type is described by a set of parameters such as length, width, height and position in a world coordinate system, among others. Although reconstruction is restricted to the predefined building types, modeling becomes very fast and efficient for buildings whose model is already defined. One can consider, for instance, flat, gable and hip roof buildings as basic primitives. A parametric building model is described by two types of parameters. Shape parameters describe the building model's geometry, while pose parameters give the location and orientation of the building in a given world coordinate system. Assuming that buildings are aligned horizontally, three shape and four pose parameters are needed to represent a flat roof building. Length, width and height are the shape parameters; the pose parameters are the reference point coordinates (x, y, z) and the orientation in the xy-plane, generally called azimuth. Figure 2.1 shows models of a flat, a gable and a hip roof building.

Figure 2.1: Parametric building models without overhang: (a) a flat roof building, (b) a gable roof building, (c) a hip roof building. Adopted from (Suveg and Vosselman, 2004)
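The parameter split described above can be made concrete with a small sketch. This is an illustration only, not code from the thesis; the class and method names are invented. It collects the three shape and four pose parameters of a flat roof model and derives the roof footprint in world coordinates from them.

```python
import math
from dataclasses import dataclass

@dataclass
class FlatRoofModel:
    # pose parameters: reference point and orientation in the xy-plane
    x: float
    y: float
    z: float
    azimuth: float  # radians
    # shape parameters
    length: float
    width: float
    height: float

    def footprint(self):
        """Corners of the roof outline in world coordinates, starting
        at the reference point and going counter-clockwise."""
        ca, sa = math.cos(self.azimuth), math.sin(self.azimuth)
        local = [(0.0, 0.0), (self.length, 0.0),
                 (self.length, self.width), (0.0, self.width)]
        return [(self.x + u * ca - v * sa, self.y + u * sa + v * ca)
                for u, v in local]
```

With azimuth = 0 the footprint is an axis-aligned length by width rectangle anchored at the reference corner; a non-zero azimuth rotates it in the xy-plane.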

2.1.2 Fitting algorithms

Starting with approximate shape and pose parameter values of a building model, one can estimate more precise parameters by fitting the model to remote sensing (RS) images (Sester and Förstner, 1989; Lowe, 1991; Fua, 1996; Vosselman, 1998). Sester and Förstner (1989) determine precise parameters in two steps: a probabilistic clustering method is used to find the approximate location of the projected model in the image, followed by robust estimation, which determines the final values. Both steps work on the matching result of projected model edges with the edges extracted from the image. The clustering algorithm is limited to determining only a few parameters. Fitting of projected model edges to images by a snake approach is used in Fua (1996). A parametric model is obtained from the images by adjusting the model parameters until the objective (energy) function is minimized. Geometric constraints (e.g. angles between parallel and perpendicular lines) are applied to safeguard the geometric properties of the model. A least squares fitting approach is used by Lowe (1991). It finds the correspondences between the projected wire frame model and the edges already extracted from a set of images. The method minimizes the square sum of perpendicular distances of edge pixels to the nearest wire frame edge. This iterative least squares adjustment approximates the changes in parameter values that are required to minimize the square sum of these distances. Each pixel has unit weight in Lowe's fitting algorithm. His algorithm was modified by Vosselman (1998). In contrast to using only edge pixels as in Lowe (1991), Vosselman uses all the pixels within a buffer around the projected wire frame edges. To ensure that edge pixels (having higher gradient values) dominate the parameter estimation, Vosselman (1998) uses the squared gray value gradient of each pixel as the weight in the observation equation.
An observation equation for each pixel can be written as

    E(u) = \sum_{i=1}^{K} \frac{\partial u}{\partial p_i} \, \Delta p_i    (2.1)

and the weight of the pixel is determined as

    W(u) = |\nabla g|_u^2    (2.2)

where
    u = perpendicular distance of a participating pixel to its nearest wire frame edge
    p_i = the parameters
    K = number of parameters
    \Delta p_i = approximate change in the i-th parameter to be estimated
    g = pixel intensity (so \nabla g is the gray value gradient at the pixel)

The optimization algorithm by Fua (1996) is computationally expensive and takes a significant number of iterations before the best fit is achieved. The fitting algorithms by Sester and Förstner (1989) and Lowe (1991) are faster; however, weak edge pixels may not participate in the parameter estimation if the (weak) edges remain undetected by the line extraction algorithm. As all pixels within a buffer around the projected wire frame lines are used for parameter estimation in (Vosselman, 1998), a large number of pixels participate at once.
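The effect of the gradient weights can be illustrated with a toy one-parameter version of this adjustment. The sketch below is not the thesis implementation and its names are invented; it performs a single least squares step for a pure shift of a projected edge. In that case the partial derivative of u with respect to the shift is 1 for every pixel, so the update reduces to a weighted mean in which strong-gradient (edge) pixels dominate, as the squared-gradient weighting intends.

```python
def ls_shift_update(p0, pixels):
    """One Gauss-Newton step for an edge-position parameter p.

    pixels: (x, g) pairs, where x is the pixel position and g its
    gray value gradient. Each pixel contributes the observation
    u + (du/dp) * dp = 0 with u = p0 - x and weight w = g**2,
    so strong-gradient pixels dominate the estimate.
    """
    num = den = 0.0
    for x, g in pixels:
        u = p0 - x       # signed distance of the pixel to the edge
        w = g * g        # squared gradient as weight
        num += w * (-u)  # right-hand side of the normal equation
        den += w         # normal "matrix" (a scalar here)
    return p0 + num / den
```

Starting from p0 = 10, three strong-gradient pixels near x = 12 pull the edge to 12 even if weak background pixels lie elsewhere in the buffer.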

2.1.3 Plane sweeping

Translational plane sweeping together with cross correlation between images can determine the location of a world plane. Here, a world plane refers to a 3D plane formed by a building face, e.g. a wall façade or a roof plane. The technique has three main steps:

Iterative traversal of a virtual (2D) plane
Image rectification
Multi-view correlation

Images of points on a plane are related to the corresponding image points in another view by a projective transformation called a homography (Hartley and Zisserman, 2003). The homography induced by a plane maps points from the first view to the second and vice-versa, as they are images of points on the same plane. A ray through point x on the first image plane is prolonged to meet the world plane π at a point X_π. This point X_π, projected into the other image plane, lies at a point x'. The mapping of x to x' is the homography induced by the plane π. The perspectivity between the world plane and the first image plane is x = H_{1π} x_π, and the perspectivity between the world plane and the second image plane is x' = H_{2π} x_π. These perspectivities result in the homography x' = H_{2π} H_{1π}^{-1} x = H x between the two image planes (cf. figure 2.2).

Figure 2.2: The homography induced by a plane. Source: (Hartley and Zisserman, 2003)

A point in one view corresponds to a line in the other view through epipolar geometry; this line is known as the epipolar line and is the image of the ray through the point in the first view. If a plane is at the correct depth, the intensities at corresponding pixels will be highly correlated. Thus, a correct plane (π_1 in figure 2.3) leads to a match between the relevant parts of the two views, and hence to maximum correlation between the images of those parts.
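For calibrated cameras the composed homography has a well-known closed form: with the first camera at the origin, the second at [R | t], and the plane written as nᵀX = d, points on the plane satisfy R X + t (nᵀX)/d = (R + t nᵀ/d) X, so H = R + t nᵀ/d in normalized image coordinates. The sketch below is an illustrative reconstruction of this textbook relation, not code from the thesis, and all names (and the sign convention nᵀX = d) are assumptions made here.

```python
import math

def plane_homography(R, t, n, d):
    """Homography induced by the plane n.X = d between two calibrated
    views: camera 1 is P1 = [I | 0], camera 2 is P2 = [R | t].
    For X on the plane, n.X / d = 1, hence
    R X + t = (R + t n^T / d) X, i.e. H = R + t n^T / d."""
    return [[R[i][j] + t[i] * n[j] / d for j in range(3)]
            for i in range(3)]

def apply(M, v):
    """Multiply a 3x3 matrix with a homogeneous 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
```

A point on the plane projected into both views then satisfies x' ~ H x (equality up to scale), which is exactly the mapping that is swept over candidate plane positions during plane sweeping.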

Figure 2.3: Plane sweeping principle and epipolar line. A ray through x in the first view intersects planes π_1, π_2 and π_3 at points X_{π1}, X_{π2} and X_{π3} respectively. The images of these points coincide at point x in the left view. The images of the corresponding object points are x_1, x_2 and x_3 respectively in the right view; these points form an epipolar line l_x in the right view. Modified from (Hartley and Zisserman, 2003)

2.2 3D RECONSTRUCTION OF BUILDINGS: AN OVERVIEW

Quite a number of systems have been proposed for the reconstruction of man-made objects. These systems can be classified by the data sources they use, the object model, and the type of operation: manual, semi-automatic or fully automatic. Since manual and semi-automatic systems require an operator's interaction, they are time consuming and expensive; hence research nowadays focuses on developing automatic reconstruction methods. Considerable research has been carried out to automatically reconstruct buildings from nadir-view airborne images, e.g. (Haala, 1996) as cited in (Brenner, 2005), and (Fischer et al., 1998; Baillard and Zisserman, 1999). Similarly, substantial work has been done on building reconstruction from airborne laser scanning (ALS) data, e.g. (Vosselman, 1999; Oude Elberink and Vosselman, 2009; Oude Elberink, 2010). To reduce the complexity of the task, several researchers have used ground plans of buildings together with either image data, e.g. (Suveg and Vosselman, 2004), or laser data, e.g. (Vosselman and Dijkman, 2001). On the one hand, reconstruction from images has a low degree of automation; on the other hand, smaller planar segments are not detected well from laser data. Some approaches have therefore been presented which combine the abilities of both data sets.

Rottensteiner and Briese (2003) suggest using aerial images to detect small planar segments and fitting wire frame models (derived from laser data) to airborne images to improve the geometric accuracy of the final model. A large number of buildings are simple and have parameterized standard roof shapes (flat, gable and hip shaped roofs are some examples). Object reconstruction using parametric models is presented in (Vosselman, 1998; Scholze et al., 2002; Suveg and Vosselman, 2004). Suveg and Vosselman (2004) propose to have the object's reference point (x, y, z) and orientation as pose parameters. In addition, they have a number of shape parameters depending on the building roof type (cf. figure 2.1). An automatic method of reconstructing 3D planar faces from multiple images of a scene is described in (Baillard and Zisserman, 1999). The method is based on inter-homographies of six images; a single 3D line with surrounding texture is sufficient to form a plane hypothesis. The angle of the plane is estimated by rotational plane sweeping: the maximum correlation value between views corresponds to the angle of the plane. Whereas in translational plane sweeping a virtual plane is translated in small steps, in rotational plane sweeping the plane is rotated by a small specified angle. Since quasi-nadir images are used as input, only roof planes are reconstructed. 3D models of buildings are also obtained from close range images. Translational plane sweeping is applied to a number of close range images to determine building façades in (Werner and Zisserman, 2002). A coarse polyhedral model is obtained by translational plane sweeping, followed by refinement with rectangular block (for doors and windows) and wedge block (for dormers) fitting. Figure 2.4 illustrates how a vertical wall location is determined using translational plane sweeping in (Werner and Zisserman, 2002). An image-based optimization method is employed in (Zebedin et al., 2006) for precise estimation of building façades using Digital Surface Models (DSM) generated from aerial images; they use homography based translational and rotational plane sweeping. A review of building reconstruction methods from airborne images and laser scanning is presented in (Brenner, 2005), and automatic building reconstruction methods are summarized in (Haala and Kada, 2010).

Figure 2.4: Plane sweep under translations along a principal scene direction. (a, b, c) show images superimposed by a homography map corresponding to translating a virtual scene plane. This scene plane is parallel to the left wall. The right-hand figures illustrate the position of the wall and the swept plane from a plan view. (d) shows a plot of the score function against translation. The circles correspond to the translations in (a, b, c), respectively. The middle translation is the one which best registers the planes, and this is visible in (b), where the plane of interest is most focused. Source: (Werner and Zisserman, 2002)
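The behaviour of the score function in figure 2.4(d) can be imitated in one dimension: shifting one signal against the other plays the role of translating the virtual plane, and the normalized cross-correlation score peaks at the shift that best registers the two views. This is a toy sketch with invented names, assuming a purely translational disparity between the views.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0.0 and db > 0.0 else 0.0

def sweep_1d(view1, view2, window, shifts):
    """Score every candidate plane position (here: a 1-D shift) by the
    correlation it induces between the two views and return the best
    one, mirroring the score maximum in figure 2.4(d)."""
    scores = [ncc(view1[:window], view2[s:s + window]) for s in shifts]
    return shifts[scores.index(max(scores))], scores
```

If the second view is the first one displaced by three samples, the score curve peaks at shift 3, just as the correct wall position maximizes the score in the real sweep.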

WHY DO WE NEED OBLIQUE AERIAL IMAGES?

If a roof overhang is present, the building walls cannot be estimated correctly from nadir-view aerial images or airborne laser scanning (ALS) data. This leads to inconsistent building outlines in cadastral systems. Airborne laser scanning data can be fused with mobile laser scanning (MLS) or terrestrial laser scanning (TLS) data to reconstruct buildings with consistent wall locations; automatic extraction of vertical walls from ALS and MLS data is presented in (Rutzinger et al., 2009). However, this approach has the following limitations:

One needs at least two different data sources, which is expensive.
Only street views are available from MLS and TLS.
In most cases, the obtained models have to be visualized with proper building texture, so either oblique images or a combination of nadir-view aerial and terrestrial images is needed. Acquisition of terrestrial images is difficult and time consuming, and terrestrial images can be taken from street views only.

Since consistent textured models can be obtained from oblique aerial images alone, we believe that oblique images are the optimal data source for obtaining consistent, textured and inexpensive 3D models of buildings, even though the quality of texture obtained from oblique images is worse than that of terrestrial images.

2.3 OBLIQUE AERIAL IMAGES: AN OVERVIEW

Though oblique images have been used for texturing and visualization for many years (Früh et al., 2004; Grenzdörffer et al., 2007; Wang et al., 2008a), their use for measurement purposes is relatively new. Height calculation of an object from an oblique image is described in (Höhle, 2008). A dense matching method for oblique images is shown in (Gerke, 2009). While Mishra et al. (2008) use image descriptors like color, intensity gradients and texture, Nyaruhuma et al. (2010) employ cues such as building edges, wall façades and texture present in oblique images to verify 2D vector data sets.
Gerke (2010) incorporates scene constraints, e.g. horizontal, vertical and right-angled linear features, into the bundle block adjustment of image sequences to automatically determine exterior and interior camera parameters. Automatic detection of rectangular flat roof buildings from multi-view oblique imagery is presented in (Xiao et al., 2010). They apply a plane sweeping technique to identify the building roof type. A hierarchical search step for plane sweeping, together with cross correlation, is used to obtain the height of a flat roof building.

Chapter 3 Research methodology

The chapter presents a step by step explanation of the approach employed in this research. An extended parametric building model is described in Section 3.1. Section 3.2 covers the selection of initial parameters and the self-occlusion test in oblique images. Section 3.3 explains the developed fitting algorithm. Overhang estimation by the plane sweeping technique is described in Section 3.4. Finally, ground height estimation by sweeping a horizontal plane is covered in Section 3.5. Figure 3.1 shows the overall research approach.

Figure 3.1: Methodology adopted

A three step building reconstruction approach is adopted in this research. First, the roof of a building is reconstructed by the fitting algorithm. Next, the plane sweeping technique is utilized to estimate the roof overhang of the building. Finally, the ground height, and thus the building height, is obtained by sweeping a horizontal plane. Ideally, wire frame fitting could be used for roof overhang estimation as well, but because numerous edge lines are extracted from fences, shadows and nearby buildings, the fitting algorithm is led to wrong overhang estimates. Plane sweeping, on the other hand, is computationally expensive and does not determine the extent of the roof planes; it can only estimate the height and angle of roof planes (in the case of gable and hip roof buildings).

3.1 PARAMETRIC BUILDING MODEL
A building reconstruction system requires an internal representation of building models. Regularity and similarity can be seen in simple buildings: their shape can generally be represented with a few parameters such as length, width and height. A flat roof, a gable roof and a hip roof building are considered basic/simple building types. A parametric building model can be described by two types of parameters: i) shape parameters, and ii) pose parameters (cf. Chapter 2 for details). As overhangs are not modeled by existing systems, they have no parameters to represent a building's overhang. Thus, we extended the parameters of the building model by Suveg and Vosselman (2004) so that a complete building with overhang can be estimated. As symmetric properties are exploited in parametric building models, one parameter represents the overhang in the main direction (mo) and another represents the overhang from the sides (so). Table 3.1 summarizes the building types, the number of required parameters and the parameters themselves. The building models and their parameters are shown in figure 3.2.

Table 3.1 Summary of building parameters. Adopted from (Suveg and Vosselman, 2004)

building type | no. of parameters | shape parameters                                                      | pose parameters
flat roof     | 9                 | length, width, height, main and side overhangs                        | reference point coordinates (x, y, z) and azimuth
gable roof    | 10                | length, width, height, ridge height, main and side overhangs          | reference point coordinates (x, y, z) and azimuth
hip roof      | 11                | length, width, height, ridge height, ridge length, main and side overhangs | reference point coordinates (x, y, z) and azimuth

3.2 INITIAL PARAMETERS AND SELF-OCCLUSION TEST

3.2.1 Selection of initial parameters
As building detection is not part of this research, initial approximate parameters are given to the system (except for the overhangs, which are initially assumed to be zero). If a unique reference point is not explicitly given, it has to be derived from the wire frame, which needs special attention as it is needed later for conversions from parameters to building corner points and vice versa. The building orientation with the proper sign must also be picked. To select a unique reference point and the proper orientation, the following technique is used: a test is made to check how many of the four corners of a building at reference height fall in a particular quadrant. If only one corner point falls in the second quadrant, it is taken as the reference point, and the distance between the reference point and the corner point in the first quadrant gives the length of the building. If two corners are in the second quadrant, the left corner is taken as the reference point and the distance from this to the right one gives the length. If no corner lies in the second quadrant, there will be two corners in the third quadrant; the left one is taken as reference and the distance from it to the left corner in the first quadrant gives the length. The orientation of the building (generally known as azimuth) is taken as the angle between the building edge in the length direction and the y-axis. All the possible cases are shown in figure 3.3.

Figure 3.2: (a) a flat roof building, (b) a gable roof building, (c) a hip roof building. (a, b, c) show parametric building models with overhang. (d) shows the orientation of a building in the xy-plane.

3.2.2 Self-occlusion test
When an object part or the object itself is invisible from a certain view, the invisible part/object is said to be occluded in the image from that view. There are mainly two types of occlusion: i) self-occlusion, in which part of an object (building) is obstructed/blocked by other parts of the same object (the building itself); ii) occlusion caused by nearby objects such as other buildings and trees, which is not treated here. If self-occlusion is not detected, edge lines (lines extracted from images) from nearby objects might be paired with a wire frame edge that is invisible in the image.
Thus, edge lines corresponding to a wire frame edge occluded in an image should not be used in the fitting algorithm; use of those lines may lead to wrong fitting results.

Figure 3.3: Selection of unique reference point. (a) one corner in the second quadrant, (b) two corners in the second quadrant, (c) two corners in the third quadrant. The corner with the black dot represents the selected reference point. Length (L), width (W) and azimuth (A) of the building are also shown.

No part of a building below the roof is visible in nadir-view aerial images; in this sense, nadir images suffer a lot from self-occlusion. However, no edge line corresponding to vertical wire frame edges is obtained from these images in this research, and all roof wire frame edges are visible in them. Thus, the self-occlusion test is unimportant for nadir images here. In contrast, some roof wire frame edges may be invisible in oblique images, and we may get edge lines for those invisible wire frame edges as well. This should be avoided, as inclusion of such wrong edge lines in the fitting algorithm may lead to wrong results. Hence, this test is of paramount importance for oblique images. A wire frame edge is self-occluded if at least one of the planes formed by the faces of the building lies between the wire frame edge being tested and the camera center, obstructing the camera's view of the wire frame edge. The intersection points of the face planes with the viewing ray determine whether the wire frame edge is occluded in the image. The method to test for self-occlusion of wire frame edges works as follows: a plane is formed for each building face. A line called the viewing ray is defined from the camera center at the moment of image acquisition to the midpoint of the wire frame edge. The intersection point of the plane with the viewing ray is checked: if it lies between the camera center and the midpoint of the wire frame edge being tested, the wire frame edge is occluded (cf. figure 3.4). The test is repeated for all faces of the building per wire frame edge.
If a wire frame edge is occluded by any face of the building, it is taken as an occluded wire frame edge. One has to be very careful to check that the intersection point lies within the boundary of the building face: as a plane is infinite, it can intersect the viewing ray between the camera center and the midpoint of the wire frame edge even when the building face does not obstruct the wire frame edge at all (cf. figure 3.5). Hence, two conditions have to be satisfied for a wire frame edge to be occluded: i) the intersection point must lie between the camera center and the midpoint of the wire frame edge being tested, and ii) the intersection point must be interior to the face boundary. To determine whether the intersection point is interior to the face boundary, the sum of the angles subtended at the intersection point by each pair of consecutive vertices of the face polygon is tested. If the sum is 2π, the point is interior to the face boundary; the further outside the face the point lies, the more the sum tends towards zero. Since the intersection point is by construction on the plane, one does not need to test for that separately; the sum of angles alone determines whether the point is interior or exterior. Thus, whenever the sum of the angles is 2π, the point is interior. A point-in-polygon test is shown in figure 3.6.

Figure 3.4: Occlusion test conditions
Figure 3.5: Viewing ray obstructed by the infinite plane, but not by the building face itself
Figure 3.6: An interior point making a sum of angles of 2π with the line end points of a polygon
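The two occlusion conditions can be sketched in code. The following is a minimal illustration, not the thesis implementation; all function names are my own. It combines the ray-plane intersection test (condition i) with the angle-sum interior test (condition ii):

```python
import numpy as np

def intersect_ray_plane(cam, target, plane_point, plane_normal, eps=1e-9):
    """Intersect the viewing ray cam -> target with an infinite plane.
    Returns the ray parameter t (fraction along the ray), or None if parallel."""
    d = target - cam
    denom = np.dot(plane_normal, d)
    if abs(denom) < eps:
        return None
    return np.dot(plane_normal, plane_point - cam) / denom

def point_in_face(point, face_vertices, tol=1e-6):
    """Angle-sum test: the point is interior if the angles it subtends with
    consecutive face vertices sum to 2*pi; for exterior points the sum is smaller."""
    total = 0.0
    n = len(face_vertices)
    for i in range(n):
        a = face_vertices[i] - point
        b = face_vertices[(i + 1) % n] - point
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na < tol or nb < tol:
            return True  # point coincides with a vertex
        cosang = np.clip(np.dot(a, b) / (na * nb), -1.0, 1.0)
        total += np.arccos(cosang)
    return abs(total - 2.0 * np.pi) < 1e-3

def edge_is_occluded(cam, edge_mid, faces):
    """A wire frame edge (represented here by its midpoint) is occluded if some
    building face cuts the viewing ray strictly between camera and midpoint.
    The strict bounds on t also exclude the face containing the edge itself."""
    for verts in faces:
        verts = np.asarray(verts, dtype=float)
        normal = np.cross(verts[1] - verts[0], verts[2] - verts[0])
        t = intersect_ray_plane(cam, edge_mid, verts[0], normal)
        if t is not None and 1e-6 < t < 1.0 - 1e-6:
            hit = cam + t * (edge_mid - cam)
            if point_in_face(hit, verts):
                return True
    return False
```

In a full implementation the test would be run for both end points or several points along the wire frame edge rather than the midpoint only.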

3.3 PRECISE ESTIMATION BY FITTING
We aim to obtain precise values of a building's parameters, whose approximate values are already given, as well as of the newly introduced overhang parameters. A fitting algorithm can be used to fine tune the parameters. The algorithm fits roof wire frame edges to their corresponding edge lines extracted from oblique and/or nadir aerial images. It is an iterative least squares method that estimates the changes in parameter values that minimize the squared sum of perpendicular distances between edge lines and their nearest wire frame edge. The wire frame edges formed from the initial approximate parameters are projected into the images in which the building is visible. Edge lines are extracted from the images using the Burns straight line extraction algorithm (Burns et al., 1986), an effective straight line extraction method that works on gradient orientation rather than gradient magnitude and exploits the global context of intensity variation associated with a line. The perpendicular distance of each edge line center to every visible wire frame edge is calculated. Similarly, the angles between visible wire frame edges and extracted edge lines are computed. The extracted edge lines that are within some distance and angle thresholds of one of the visible wire frame edges are selected and fed to the fitting algorithm for fine tuning the building parameters. As we are dealing with straight edges only, we do not need to use every edge pixel for fitting. We take every n-th edge pixel as a sample point and use its perpendicular distance to the nearest wire frame edge as an observation (cf. figure 3.7b). Though the two end points of a line would be sufficient, this sampling indirectly represents the length of the edge lines, so that longer edge lines have more influence on the fitting results. Furthermore, it delivers more observations from longer lines, which makes the system robust.
A linearized observation equation can be written for each sampling point on the edge lines. For a sampling point j, the observation equation reads

E(\Delta u_j) = \sum_{i=1}^{K} \frac{\partial u_j}{\partial p_i} \, \Delta p_i   (3.1)

where
\Delta u_j = perpendicular distance between point j and its nearest wire frame edge
p_i = parameters
K = number of parameters
\Delta p_i = approximate change in the i-th parameter, to be estimated

This algorithm differs from the algorithms by Lowe (1991) and Vosselman (1998). We use every n-th edge pixel in the fitting algorithm, whereas Vosselman (1998) employs all pixels within a buffer around the wire frame edges and Lowe (1991) uses all edge pixels. As only edge pixels participate in our fitting algorithm, unit weights are applied to them, which is similar to (Lowe, 1991). In contrast, Vosselman (1998) uses the gradient magnitude of the pixels as the weight of the observation equation, as both edge and non-edge pixels participate in his algorithm. The fitting algorithm by Vosselman (1998) is depicted in figure 3.7a. Wire frame edges are shown in green; a buffer (shown by the black rectangle) around the wire frame edges is used to select the participating pixels, all of which contribute to the algorithm. The fitting algorithm used in this research is demonstrated in figure 3.7b. Red lines represent the edge lines extracted from the image; the perpendicular distance of every n-th edge pixel (in black) to its nearest wire frame edge is used as an observation. Compared to (Vosselman, 1998), our algorithm uses fewer pixels.
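Equation 3.1 leads to a Gauss-Newton style update: build the Jacobian of the perpendicular distances with respect to the parameters and solve for the parameter changes in a least squares sense. The following is a toy sketch of that scheme, my own simplification rather than the thesis code: it uses numerical partial derivatives and fits a 2D rectangle (standing in for the projected roof outline) to sampled edge points.

```python
import numpy as np

def rect_corners(p):
    """Corners of a rectangle from parameters (x, y, length, width, azimuth);
    (x, y) is the reference corner, azimuth rotates the length axis."""
    x, y, L, W, az = p
    c, s = np.cos(az), np.sin(az)
    R = np.array([[c, -s], [s, c]])
    local = np.array([[0, 0], [L, 0], [L, W], [0, W]], float)
    return local @ R.T + np.array([x, y])

def seg_dist(pt, a, b):
    """Perpendicular (point-to-segment) distance."""
    ab, ap = b - a, pt - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(pt - (a + t * ab))

def residuals(p, pts):
    """u_j: distance of each sample point to its nearest model edge."""
    c = rect_corners(p)
    edges = [(c[i], c[(i + 1) % 4]) for i in range(4)]
    return np.array([min(seg_dist(pt, a, b) for a, b in edges) for pt in pts])

def fit(p, pts, n_iter=20, h=1e-6):
    """Iterative least squares (eq. 3.1) with a forward-difference Jacobian."""
    p = np.asarray(p, float)
    for _ in range(n_iter):
        u = residuals(p, pts)
        J = np.empty((len(pts), len(p)))
        for i in range(len(p)):
            dp = np.zeros_like(p)
            dp[i] = h
            J[:, i] = (residuals(p + dp, pts) - u) / h
        step, *_ = np.linalg.lstsq(J, -u, rcond=None)  # minimize ||u + J dp||
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p
```

In the thesis the partial derivatives act on the full 3D-to-image projection of the wire frame, but the update structure is the same: solve for \Delta p, apply, re-project, repeat until convergence.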

Figure 3.7: Comparison of the fitting approach with (Vosselman, 1998): (a) fitting algorithm by Vosselman (1998), (b) fitting algorithm used in this research. Wire frame edges (in green), edge lines (in red), sampling points (black dots) and the buffer around the wire frame (black rectangle) are shown.

The use of every n-th edge pixel is not only sufficient to model objects with straight edges but also generates enough observations to make the model robust. Using either all pixels around the wire frames or all edge pixels is unnecessary for modeling objects with straight edges; these approaches would probably add processing overhead without much improving the result in this scenario. It has to be noted that these algorithms (Lowe, 1991; Vosselman, 1998) are designed for general use and are not limited to fitting object models made up of straight lines. Though the roof overhang can theoretically be estimated by fitting, many of the edge lines obtained near vertical wire frame edges belong to shadows, fences or edges of nearby buildings, so overhangs estimated by the fitting algorithm are mostly inaccurate. At the same time, wall edges cannot be obtained if the overhang is larger than the distance threshold applied for the extraction of edge lines around the projected wire frame edges. Increasing the distance threshold (the buffer around the projected wire frame edges) increases the number of wrong edge lines, which misguide the algorithm into incorrect matches. Thus, the fitting algorithm is unsuitable for roof overhang estimation in areas where buildings are closely spaced and/or for buildings with large overhangs. Figure 3.8 shows a building with a large overhang: the initial wire frame edges (in green) are projected into the image and edge lines within a threshold are extracted (in red). As the overhang is larger than the specified threshold, only edge lines from the roof, and none from the building walls, are extracted.
Some of the oblique images can be reserved for an internal accuracy check. The reserved images are not used for fitting; the wire frame edges obtained after precise estimation are projected into these images, and the residuals between the projected wire frame edges and their corresponding edge lines from the reserved images give an internal quality measure. However, this requires a sufficient number of images from different perspectives. The number of iterations needed for the system to converge is taken as another internal quality measure: if the parameters do not converge within a predefined number of iterations, the fitting results are considered unacceptable.

Figure 3.8: Failure of the fitting algorithm to determine a large overhang. Wire frame edges (in green) and edge lines (in red) are shown.

3.4 OVERHANG ESTIMATION BY PLANE SWEEPING
The roof overhangs of a building are found by sweeping vertical planes. Starting at the roof edge obtained from fitting, a vertical plane parallel to a wall façade is translated in the direction perpendicular to the wall. The correlation score is stored at every hypothesized position of the wall façade and analyzed further to determine the wall location (cf. Chapter 2 for details). The wall location just determined and the roof information are used to obtain the overhang. The similarity between two images is calculated by the normalized cross correlation method. Starting at the image origin, an n × n window is moved by one pixel at a time and the cross correlation coefficient is calculated for each window position. The number of window positions with a correlation coefficient greater than a threshold is counted. This is repeated until the whole image is covered. Finally, the number of window positions with a coefficient greater than the threshold is divided by the total number of window positions to obtain the correlation score between the images. The coefficient calculated by this method is not sensitive to linear changes of brightness and contrast. The cross correlation coefficient r for window-based local matching of two images X and Y is computed using equation 3.2:

r = \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_i (X_i - \bar{X})^2 \cdot \sum_i (Y_i - \bar{Y})^2}}   (3.2)

where
X_i = intensity of the i-th pixel in image X
Y_i = intensity of the i-th pixel in image Y
\bar{X} = mean intensity of image X
\bar{Y} = mean intensity of image Y

Figure 3.9: Viewing angle with the wall (α) and angle between two views (β)

If the images are captured from exactly the same surface, the corresponding pixels in the images will have similar intensity values; the correlation coefficient for each window will then be high, leading to a high correlation ratio/score. If parts of the images cover different areas, the correlation coefficients for those areas will be small, resulting in a low correlation score. As not all walls of a building are visible in an oblique image, a test is made to check which walls are visible in a particular image. For a wall to be visible, both of its outermost edges must be visible: if both wire frame edges of a wall are seen in an image, the wall is visible in the image; if either of them is unseen from the camera center, the wall is invisible. The self-occlusion test described in Section 3.2.2 is used to decide the visibility of a wire frame edge in an image. Images acquired at a small viewing angle (α) with the wall result in heavily distorted rectified images (cf. figure 3.9). Moreover, a very large angle between image views (β) is unfavorable for matching, while a very small angle between views leads to poor epipolar geometry. Thus, image pairs must be selected based on some angle thresholds; images from good perspectives help to achieve better correlation scores. After the visibility test, two rectified images of a wall from oblique images of different views are obtained. Starting from zero overhang, a vertical plane parallel to the wall façade is translated inwards and the correlation scores between the images are stored. This process is repeated for all image pairs in which the façade is visible.
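The window-based correlation score can be sketched directly from equation 3.2; the following assumes two already rectified images of equal size, and the function names and default thresholds are my own:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation coefficient (eq. 3.2) of two equal-size windows.
    Invariant to linear brightness/contrast changes; 0 for constant windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def correlation_score(img_x, img_y, win=5, threshold=0.7):
    """Slide a win x win window one pixel at a time over both rectified images
    and return the fraction of window positions whose coefficient exceeds
    the threshold (the correlation score between the images)."""
    rows, cols = img_x.shape
    hits = total = 0
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            total += 1
            if ncc(img_x[r:r + win, c:c + win], img_y[r:r + win, c:c + win]) > threshold:
                hits += 1
    return hits / total if total else 0.0
```

Because the coefficient is computed on mean-subtracted, normalized windows, an image pair differing only by a linear intensity change still scores 1.0, which matches the brightness/contrast invariance noted above.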
Since an overhang determined from a flat correlation curve is unreliable, a test is made to check whether an image pair has a clear peak. The relative drop of the maximum correlation score with respect to its nearest neighbor on either side, i.e. (maximum correlation - nearest point correlation) / maximum correlation, is employed as the peak measure. Since the translation step size is constant, dividing by it adds no value; dividing by the maximum correlation itself instead normalizes the measure between zero and one, which makes it easier to select a threshold. Results from image pairs having no clear peak in the correlation score are dropped. Image pairs with correlation scores below some threshold are also not considered any further. The remaining image pairs are the eligible pairs from which the overhang is determined.

Figure 3.10: Discarding poor results

Correlation results of a wall from three image pairs are plotted in figure 3.10. Image pair C does not have a clear peak, whereas image pair A has very low correlation scores. Because the overhang cannot be determined reliably from these results, the results from these image pairs are neglected. As image pair B has a clear peak as well as a high maximum correlation score, the overhang is determined from this pair. As symmetry is exploited in parametric building models, the same amount of overhang is assumed on opposite sides of the building; in other words, equal roof overhangs are presumed at the front and rear of the building, and likewise at the left and right sides. Thus, this process is repeated for the opposite wall as well, and a single overhang parameter is determined using all observations from both walls. The position of the plane corresponding to the maximum image correlation score among all eligible image pairs from the front and rear walls gives the location of the walls and thus the main overhang. Ideally, all eligible image pairs should yield the same overhang for one direction (e.g. front and rear). However, due to different perspectives and occlusion, slightly different overhangs can be obtained from different eligible image pairs. The difference between the estimated overhang and those determined from the other eligible pairs is used as an internal quality measure. Figure 3.11 shows the matching scores of a wall from two image pairs. The maximum scores from both image pairs are above the threshold and have clear peaks; therefore, both image pairs are eligible. The outcome from image pair A is taken as the roof overhang because it has the highest score.
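The peak test and the selection of eligible pairs described above can be illustrated as follows. This is a sketch with my own function names and threshold defaults, not the thesis code; each correlation curve is assumed to be sampled at the same sweep positions:

```python
def peak_measure(scores, i):
    """Normalized peak strength at index i:
    (max correlation - nearest neighbour correlation) / max correlation,
    using the larger of the two neighbours so a one-sided plateau fails the test."""
    m = scores[i]
    neighbours = [scores[j] for j in (i - 1, i + 1) if 0 <= j < len(scores)]
    if m <= 0 or not neighbours:
        return 0.0
    return (m - max(neighbours)) / m

def best_overhang(curves, positions, min_score=0.5, min_peak=0.05):
    """curves: one correlation-score list per image pair, all sampled at the
    same sweep positions. Drop flat or low-scoring pairs, then return the
    sweep position of the highest-scoring eligible pair (None if no pair
    is eligible)."""
    best = None
    for scores in curves:
        i = max(range(len(scores)), key=scores.__getitem__)
        if scores[i] < min_score or peak_measure(scores, i) < min_peak:
            continue  # low correlation or no clear peak: unreliable
        if best is None or scores[i] > best[0]:
            best = (scores[i], positions[i])
    return None if best is None else best[1]
```

With the three example curves of figure 3.10, pair A fails the score threshold, pair C fails the peak test, and the overhang is read from pair B, mirroring the selection described in the text.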
The difference between the overhangs obtained from these image pairs (10 cm in this example) is displayed as an internal quality measure of the process. Likewise, the process is repeated for the side overhang, which is obtained from the left and right building walls.

3.5 PLANE SWEEPING FOR GROUND HEIGHT ESTIMATION
Four horizontal planes around a building (cf. figure 3.12) are swept in the vertical direction to obtain ground heights, in a fashion similar to finding the wall location. As parametric buildings are represented by a single height, the minimum ground height of the four area sections is taken to compute the building height.

Figure 3.11: Quality of result from redundant information

The approximate ground height is computed first from the comparatively lower resolution nadir-view aerial images. The process is then repeated for the four sections with oblique images, around the ground height determined from the nadir images, to obtain precise values. The sweeping range for the oblique images is decided based on the resolution difference between the nadir and oblique images. The minimum of the precise ground heights is used to calculate the building height. This has two advantages over direct computation from oblique images:

- Nadir-view images suffer less from occlusion.
- The approach is computationally less expensive, because sweeping with oblique images only has to be repeated over a short range around the height obtained from the nadir images.

As many oblique image pairs exist for an area, it would be computationally expensive to determine the ground height with these images around an unknown height. However, failure of the nadir images to estimate the approximate ground height leads to unsuccessful results. The process of determining the ground height is the same as the overhang estimation described in Section 3.4, except that a horizontal plane instead of a vertical plane is used for sweeping. As the nadir images are utilized only to provide an approximate ground height, the peak test is not performed on the results of these images. Similar to the overhang, the differences between the ground heights of the same area section determined from different eligible oblique image pairs could be used as an automatic quality measure. Since ground height estimation suffers mostly from occlusion by other objects (which is not treated in this research), this test is not explicitly performed here.

Figure 3.12: Four area sections around a building for ground height estimation
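The coarse-to-fine sweep can be sketched with the matching score abstracted as a callable. The score functions, names, step sizes and search margin below are illustrative assumptions only; in the thesis the score is the image correlation of the swept horizontal plane:

```python
def sweep(score_fn, lo, hi, step):
    """Evaluate the matching score at every hypothesized plane height in
    [lo, hi] and return the height with the maximum score."""
    best_h, best_s = lo, float("-inf")
    h = lo
    while h <= hi + 1e-9:
        s = score_fn(h)
        if s > best_s:
            best_h, best_s = h, s
        h += step
    return best_h

def ground_height(nadir_score, oblique_score, lo, hi,
                  coarse_step=0.5, fine_step=0.05, margin=1.0):
    """Coarse sweep with the (lower resolution) nadir score function over the
    full range, then a fine sweep with the oblique score function in a short
    range around the coarse result."""
    approx = sweep(nadir_score, lo, hi, coarse_step)
    return sweep(oblique_score, approx - margin, approx + margin, fine_step)
```

The two-stage structure mirrors the advantages listed above: the expensive oblique matching is evaluated only over a narrow band around the cheaply obtained approximate height.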

Chapter 4 Experimental results and discussion

This chapter describes the study area and the data sets used, presents some of the results and discusses them. Section 4.1 describes the images, the airborne laser scanning data and the orthoimage. In Section 4.2, the results of roof parameter estimation are presented. Section 4.3 demonstrates the output of roof overhang estimation, whereas in Section 4.4 the results of ground height estimation are elaborated. Finally, the main findings are discussed in Section 4.5.

4.1 STUDY AREA AND DATA SETS
The study area is located in the north of Enschede, The Netherlands. The area is populated with flat, gable and hip roof buildings. Oblique aerial images are the main data source for this research. Nadir-view aerial images were used to find the approximate ground height. A high resolution orthoimage was employed to verify the planimetric accuracy of the results, whereas the roof overhangs were verified with field survey measurements. The accuracy of the third dimension was assessed with airborne laser scanning (ALS) data.

4.1.1 Oblique aerial images
The oblique images used for this study were acquired in February 2007 by Pictometry Inc. (Blom Aerofilms). In addition to four oblique images, a nadir image is captured at the same time, all from small frame cameras. The images are taken in nadir, forward, backward, left and right orientations; a scene is captured in multiple overlapping images from different views. Only the oblique images were available to us. The specification of the images is listed in table 4.1. All images used in this research were oriented using (Gerke, 2010). After self-calibration and bundle block adjustment, an RMSE of 20 cm was found at check points.

Table 4.1 Specification of oblique images from Pictometry

characteristic                      | value
flying height (m)                   | 920
baseline (m)                        | 400
focal length (mm)                   | 85
sensor size (mm)                    |
pixel size (µm)                     | 9
tilt (degrees)                      | 50
ground sampling distance, GSD (cm)  |

Table 4.2 Specification of nadir images from Vexcel UltraCam D

characteristic                      | value
flying height (m)                   | 1200
baseline (m)                        | 800
focal length (mm)                   | 101
pixel size (µm)                     | 9
ground sampling distance, GSD (cm)  |

4.1.2 Nadir-view aerial images
As nadir-view aerial images from Pictometry were unavailable for this research, images captured in March 2008 with an UltraCam D were used instead. Only resampled images were available; the pixel size after resampling was 30 cm. The specification of the original nadir images is listed in table 4.2.

4.1.3 Airborne laser scanning data and orthoimage
The ground height and the building height were assessed with ALS data. These data were acquired in March. The average point density of the laser data is 20 pts/m² (Vosselman, 2008), with a height accuracy of 10 cm. The orthoimage used in the study was produced from images captured with the Vexcel UltraCam D. The full resolution orthoimage, with a nominal Ground Sampling Distance (GSD) of 11 cm, was available. A mean deviation of 12 cm from Global Positioning System (GPS) measurements was found. The specification of the original images is given in Section 4.1.2.

4.2 ROOF PARAMETERS ESTIMATION
First, rough building models were reconstructed from the ALS data using the Point Cloud Mapper (PCM) software, and the corner points of the reconstructed buildings were saved in text files. This stood in for building detection, which is not part of this research. Approximate parameter values were computed from the approximate corner point coordinates and taken as initial parameters. 3D wire frames were created from these initial parameters and projected into the images using the camera orientation information. The edge lines were extracted using the Burns straight line extraction algorithm (Burns et al., 1986). The parameters of the algorithm were tuned so that enough lines, even those with low contrast, were obtained. The parameter values used for the algorithm are summarized in table 4.3.
Edge lines within a distance of 4 pixels and an angle of 10° of the projected roof wire frame edges were selected. The perpendicular distance of every 5th pixel on the edge lines to its nearest wire frame edge was employed as an observation in the fitting algorithm, which determined all roof parameters (length, width, etc.) of a building's roof.
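The distance and angle thresholding used to pair extracted edge lines with projected wire frame edges can be sketched as follows. This is a simplified 2D version with my own function names; the default thresholds mirror the 4 pixel / 10° values above:

```python
import numpy as np

def line_angle(a, b):
    """Orientation of segment a -> b, folded into [0, pi)."""
    d = b - a
    return np.arctan2(d[1], d[0]) % np.pi

def point_line_dist(pt, a, b):
    """Perpendicular distance of pt to the infinite line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]])
    return abs(np.dot(pt - a, n / np.linalg.norm(n)))

def select_edge_lines(edge_lines, wire_edges, max_dist=4.0,
                      max_angle=np.radians(10.0)):
    """Keep extracted edge lines whose center lies within max_dist (pixels) of
    a projected wire frame edge and whose orientation differs by at most
    max_angle from that edge."""
    kept = []
    for p, q in edge_lines:
        center = (p + q) / 2.0
        for a, b in wire_edges:
            dang = abs(line_angle(p, q) - line_angle(a, b))
            dang = min(dang, np.pi - dang)  # direction-free orientation difference
            if point_line_dist(center, a, b) <= max_dist and dang <= max_angle:
                kept.append((p, q))
                break  # one supporting wire frame edge is enough
    return kept
```

Only the lines surviving this filter are sampled (every 5th pixel here) and fed as observations into the fitting algorithm.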

Table 4.3 Parameters of the straight line extraction algorithm

parameter             | value
bucket width          | 8
gradient mask         | 0
minimum no. of pixels | 85
minimum magnitude     | 2
vote                  | 0.5

The planimetric accuracy of the fitting results was verified against the high resolution orthoimage. The specification of the orthoimage is given in Section 4.1.3. The length and width of buildings were selected as representatives of the roof parameters, and were assessed for twenty-six buildings from the test site. The roof parameters of one building were unacceptable, while the algorithm completely failed to find matches of wire frame edges to their edge lines for another one. The roof parameters of the remaining twenty-four buildings were obtained satisfactorily. The mean and standard deviation of the length differences were 0 cm and 5 cm, while those of the width were 2 cm and 5 cm, respectively. Given the resolution of the oblique images and the quality of the reference data set, the accuracy of these building parameters is regarded as acceptable. The report per building is provided in table 4.4; the two buildings discussed above are dropped from the table, and a few buildings occluded by vegetation are also not in the list. Some measurements were not possible in the orthoimage; these are shown by a dash symbol (-) in the table. The first character of the building code represents the building type (F: a flat roof, G: a gable roof, and H: a hip roof building), followed by a number within each type.

Table 4.4 Planimetric accuracy (with oblique images only). Columns: building code; length (m) from the result, from the orthoimage, and their difference; width (m) from the result, from the orthoimage, and their difference. (Individual rows not preserved in this transcription.) Summary: length mean 0.00 m, std. dev. 0.05 m; width mean 0.02 m, std. dev. 0.05 m.

Roof heights of the buildings were evaluated against the laser data. A mean height difference of 8 cm was found, with a standard deviation of 8 cm. This accuracy is acceptable given the qualities of the reference laser data and the oblique images. Of the twenty-six sampled buildings, the roof height of one building was unacceptable. Some of these buildings reached an accuracy of only around 1 pixel; they suffered from the many-to-many line assignment discussed earlier and were visible in only 1-2 images, in which that assignment took place. The comparison of the estimated heights with the reference laser data is presented in Table 4.5.

Table 4.5: Roof height accuracy per building (result vs. laser, in m); mean difference 0.08 m, standard deviation 0.08 m.

Building orientations were verified against the high resolution orthoimage. The mean and standard deviation of the orientation differences in the xy-plane were 0.23° and 0.96° respectively. The comparison of each building's orientation with that measured from the high resolution orthoimage is listed in Table 4.6. Other roof parameters such as ridge height and length were not assessed: since their representative parameters (both planimetric and height) were already evaluated, an explicit assessment of these parameters is unnecessary. Because buildings are shifted in an ortho-image, point coordinate measurement in an ortho-image is unsuitable for accuracy assessment of absolute positions.

Table 4.6: Building orientation accuracy in degrees (result vs. orthoimage, per building); mean 0.23°, standard deviation 0.96°.

On the other hand, the accuracy of 2D maps is no better than that of the oblique images used in this research. Building outlines typically suffer a shift of 15-20 cm, and buildings are generalized in maps. Moreover, newly built buildings were absent from the map. Therefore, the reference point coordinates of the buildings were not verified.

As a sufficient number of oblique images was unavailable, an automatic quality measure based on the residuals between wire frame edges and their corresponding image edge lines was not computed. Instead, the number of iterations required to complete the fitting was used as the internal quality measure: whenever the number of iterations exceeded fifteen, the parameters for that building were rejected.

Projected roof wire frame edges and their corresponding edge lines of a building are shown in Figure 4.1. In Figure 4.1a, the initial roof wire frame edges of the building are projected to an image; they do not yet fit their corresponding edges. After the precise parameters were found, the wire frame edges were projected to the original images again; in Figure 4.1b they fit the edges of the building's roof well.

Figure 4.1: Wire frame edges (in green) and edge lines (in red) extracted from the images, projected to them; (a) initial wire frame edges, (b) wire frame edges after fitting. Blue dots on the edge lines represent sampling points.

Because many edge lines from fences, shadows, or other nearby buildings were associated with the vertical wire frame edges, the roof overhang could not be determined accurately by the fitting algorithm. Figure 4.2a shows a hip roof building with many edge lines from a neighboring building. Numerous edge lines from a façade of a gable roof building are shown in Figure 4.2b. In Figure 4.2c, edge lines from the roof of a nearby building are associated with the vertical wire frame edges.

Figure 4.2: Wrong edge lines associated with vertical wire frame edges; (a) edges from another building, (b) wrong edges from the same building, (c) edges from the roof of another building. Wire frame edges (in green) and edge lines (in red) are projected to oblique images. Sampling points (blue dots) are also shown.

Influence of nadir images in fitting results

To check the influence of nadir images on the fitting process, two nadir images were added to the system containing only oblique images. The parameters before and after inclusion of these images were compared with the ortho-image. As the resolution of the nadir images was lower than that of the oblique images, the accuracy of the parameters decreased slightly. The length parameter had a mean of 5 cm and a standard deviation of 7 cm; the mean and standard deviation of the width were 6 cm each. The complete report is shown in Table 4.7.
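The iteration-based acceptance check described above can be sketched as a standard least squares loop. This is a toy illustration, not the thesis implementation: the `residuals` and `jacobian` functions are hypothetical stand-ins for the distances between projected wire frame edges and their image edge lines, and the usage example is a simple line fit.

```python
import numpy as np

MAX_ITERATIONS = 15  # fits needing more iterations are rejected (internal quality check)

def fit_wire_frame(residuals, jacobian, params, tol=1e-8):
    """Gauss-Newton least squares loop with the iteration-count check.

    Returns (params, iterations_used, accepted)."""
    for iteration in range(1, MAX_ITERATIONS + 1):
        r = residuals(params)
        J = jacobian(params)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        params = params + step
        if np.linalg.norm(step) < tol:
            return params, iteration, True   # converged: parameters accepted
    return params, MAX_ITERATIONS, False     # too many iterations: rejected

# Toy problem: recover slope and offset of a line from sampled points.
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0
p, n_iter, ok = fit_wire_frame(lambda p: p[0] * x + p[1] - y,
                               lambda p: np.column_stack([x, np.ones_like(x)]),
                               np.zeros(2))
```

Because the toy problem is linear, the loop converges in two iterations; the real fit is nonlinear in the roof parameters, which is why a cap on the iteration count is a meaningful reliability signal.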

Table 4.7: Planimetric accuracy after inclusion of two nadir images (length and width per building, result vs. orthoimage, in m); length: mean 0.05 m, standard deviation 0.07 m; width: mean 0.06 m, standard deviation 0.06 m.

Roof parameters of one building could not be found correctly from the oblique images alone, while for another building the method failed completely to fit the wire frame edges to their edge lines. Because the wire frame edge at the back of the building was only just visible and lay within the distance threshold of the main ridge wire frame edge, the edge lines belonging to that wire frame edge were paired to both wire frame edges (cf. Figure 4.3a). Similarly, the edge lines from the main ridge were paired to both of them. This was the case in three of the five oblique images in which the building was visible. Including the two nadir images added a sufficient number of good observations, and the method was then able to estimate the roof parameters. The fit of the wire frame edges to their edge lines after inclusion of the nadir images is shown in Figure 4.3b. Note that the nadir images only contributed some good observations; the system as a whole then had enough good observations to overcome the wrong many-to-many line assignment in those three oblique images. Other oblique images from good viewing perspectives could have achieved the same, had they been available. It can therefore be said that the nadir images added robustness to the system, although overall accuracy was somewhat reduced due to their lower resolution.
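The many-to-many assignment failure just described is easy to reproduce with a minimal pairing rule. The midpoint-distance test and the threshold value below are illustrative simplifications, not the actual correspondence test of the thesis:

```python
import numpy as np

def pair_edge_lines(wire_edges, edge_lines, threshold):
    """Assign every extracted edge line to each wire frame edge lying
    within `threshold`. Uses distances between 2D midpoints as a
    simplification of the real correspondence test."""
    assignment = {i: [] for i in range(len(wire_edges))}
    for j, line in enumerate(edge_lines):
        for i, edge in enumerate(wire_edges):
            if np.linalg.norm(np.asarray(line) - np.asarray(edge)) < threshold:
                assignment[i].append(j)
    return assignment

# The main ridge and a barely visible back edge lie closer together than
# the distance threshold, so the single extracted line feeds BOTH edges:
main_ridge, back_edge = (0.0, 0.0), (0.3, 0.0)
extracted = [(0.1, 0.0)]
assignment = pair_edge_lines([main_ridge, back_edge], extracted, threshold=0.5)
```

With both wire frame edges claiming the same observation, the adjustment is pulled in two directions at once, which is exactly the failure that the extra nadir observations outweighed.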

Figure 4.3: Influence of nadir images in the fitting process; (a) wrong result due to many-to-many line assignment, (b) after inclusion of nadir images.

4.3 OVERHANG ESTIMATION

A list of visible walls and the images in which each is visible is made based on the tests described in Sections and 3.4. Moreover, walls with a viewing angle (the 2D angle made by the wall with a line from the wall midpoint to the camera center) of less than 10° were filtered out, as images captured at such a small viewing angle suffer strongly from distortion. Starting at the roof edge position obtained previously, a virtual vertical plane was swept in steps of 5 cm. The two images of the wall (assumed to lie on the vertical plane) were rectified using bilinear interpolation, and the correlation score was computed with the normalized correlation coefficient matching method. A matching threshold of 0.7 was used to decide whether pixels match: if the correlation exceeded this threshold, the pixels were accepted as matching. An aperture size of 7 × 7 gave the best results. A threshold of 0.05 was chosen for the minimum acceptable correlation score. The percentage change in the maximum correlation score was used as a peak test: the ratio (maximum correlation - nearest point correlation) / maximum correlation must exceed a threshold of 0.05.

The overhangs of the buildings were compared with the field survey measurements. Of the twenty-five buildings used for comparison, the overhangs of fifteen were determined. Buildings for which neither overhang was determined are removed from Table 4.8; undetermined overhang values are shown by an asterisk (*) in the table. As a viewing angle (α) of 10° is very small, the rectified images were heavily distorted, so the estimated overhang values differed considerably from the field measurements (cf. Table 4.8 and the histogram in Figure 4.4). The experiment was then repeated with a 15° threshold on the viewing angle; the most inaccurate overhangs from the previous run no longer occurred.

As a small angle between views (β) results in poor epipolar geometry, some buildings still had inaccurate overhangs. The above experiments were therefore repeated with a 15° threshold on the angle between views of the image pairs. The inaccurate results that had come from image pairs with poor epipolar geometry were then either corrected by other image pairs or not obtained at all. The obtained values lie within approximately 10 cm of the field data (cf. Table 4.8 and Figure 4.4). Roof overhangs of only a few buildings were obtained, due to the absence of sufficient images from good perspectives. The main overhang of building G23 was still inaccurate, owing to insufficient texture on the wall.
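The sweep procedure above (viewing-angle filter, normalized correlation, minimum-score and peak tests) might be sketched as follows. The helper names and the synthetic score function are hypothetical; real use would rectify the two wall images onto each swept vertical plane and correlate them. The peak test here compares the maximum against its nearest sampled neighbour, which is one reading of the criterion in the text:

```python
import numpy as np

VIEW_ANGLE_MIN = 15.0   # degrees; walls seen more obliquely are skipped
SCORE_MIN = 0.05        # minimum acceptable correlation score
PEAK_RATIO = 0.05       # required relative drop next to the maximum

def viewing_angle(wall_a, wall_b, camera):
    """2D angle (degrees) between the wall and the line from the wall
    midpoint to the camera centre."""
    a, b = np.asarray(wall_a, float), np.asarray(wall_b, float)
    u = b - a
    v = np.asarray(camera, float) - (a + b) / 2.0
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))

def ncc(a, b):
    """Normalized correlation coefficient of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def sweep(score_at_depth, depths):
    """Pick the sweep depth with the best score; reject weak or flat peaks.
    `score_at_depth` stands in for rectifying both images onto the virtual
    vertical plane at that depth and correlating them."""
    scores = np.array([score_at_depth(d) for d in depths])
    k = int(scores.argmax())
    if scores[k] < SCORE_MIN:
        return None                                  # no acceptable score
    nearby = [scores[k - 1]] if k > 0 else []
    if k + 1 < len(scores):
        nearby.append(scores[k + 1])
    if (scores[k] - max(nearby)) / scores[k] <= PEAK_RATIO:
        return None                                  # peak not distinct
    return float(depths[k])

depths = np.arange(0.0, 1.0, 0.05)                   # 5 cm sweep steps
peaked = lambda d: float(np.exp(-((d - 0.4) / 0.05) ** 2))
overhang = sweep(peaked, depths)                     # clear peak near 0.40 m
```

A wall with uniform texture produces a flat correlation curve; the peak test then returns no overhang at all rather than an arbitrary maximum, matching the reported failures on poorly textured façades.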

Table 4.8: Comparison of overhangs with field survey measurements. Columns: building code; field measurement; estimated main and side overhangs (in cm) under the 0° and 15° thresholds on the angle between views, each with 10° and 15° viewing angle thresholds. Undetermined overhang values are marked with an asterisk (*).

Figure 4.4: Comparison of overhangs with field survey measurements. Histogram of the number of buildings per difference class (the highest classes being "21 cm and beyond" and "not found") for main and side overhangs, under the 10° and 15° viewing angle thresholds and the 0° and 15° angle-between-views thresholds.

The overhang value estimated by the image pair with the maximum correlation score among its counterparts was accepted as the roof overhang. The differences between the accepted overhang and the values estimated from the remaining eligible image pairs were used as an internal quality measure.

Figure 4.5: Two rectified images of a building's wall façade at the maximum correlation score (a, b); the correlation score of the two images (c); correlation plotted against overhang for the façade (d).

Figures 4.5a and 4.5b show the two rectified images of a building wall at the estimated depth. The correlation score between the images is shown in Figure 4.5c, and the graph in Figure 4.5d shows the trend of correlation against overhang. The correlation between the images is low at a wrong depth of the wall; at the correct overhang, i.e. when the images are rectified at the true location of the wall, the pixels in the two images come from the same portion of the wall, which yields a high correlation score and a clear peak.

Figure 4.6: Visual verification of roof overhang estimation; (a) front wall, (b) side wall. Wire frame edges (in green) are projected to the images after overhang estimation by plane sweeping.

After the overhangs on both sides were estimated, the complete wire frames were projected to the images for visual verification. As shown in Figure 4.6, the vertical wire frame edges fit the edges of the walls well; the overhangs at the front and side are seen in Figures 4.6a and 4.6b respectively. Many buildings in the area had insufficient texture on their side walls, and the majority of them were occluded by nearby buildings; therefore, side overhangs of only a few buildings were obtained. A wall façade with insufficient texture is depicted in Figure 4.7a, and a wall occluded by a neighboring building is shown in Figure 4.7b. The walls of one building were occluded by vegetation on all sides.

Figure 4.7: Failure cases of overhang estimation by plane sweeping; (a) a wall façade with insufficient texture, (b) a wall occluded by a neighboring building.
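The acceptance rule for overhangs (take the estimate from the image pair with the highest correlation score, and use the spread of the remaining pairs as the internal quality measure) might be sketched as follows; the function name and input layout are illustrative:

```python
def accept_overhang(pair_estimates):
    """`pair_estimates`: list of (overhang_m, max_correlation) tuples, one
    per eligible image pair. Keep the estimate from the pair with the
    highest correlation; the absolute differences of the remaining
    estimates from it serve as the internal quality measure."""
    best_index = max(range(len(pair_estimates)),
                     key=lambda i: pair_estimates[i][1])
    accepted = pair_estimates[best_index][0]
    spread = [abs(o - accepted)
              for i, (o, c) in enumerate(pair_estimates) if i != best_index]
    return accepted, spread

# Three hypothetical image pairs for one wall:
pairs = [(0.45, 0.62), (0.40, 0.81), (0.50, 0.55)]
overhang, quality = accept_overhang(pairs)
# overhang is 0.40, from the pair with the highest correlation (0.81)
```

A large spread flags walls where the eligible image pairs disagree, for instance those affected by poor epipolar geometry before the 15° angle-between-views threshold was introduced.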


More information

Automatic generation of 3-d building models from multiple bounded polygons

Automatic generation of 3-d building models from multiple bounded polygons icccbe 2010 Nottingham University Press Proceedings of the International Conference on Computing in Civil and Building Engineering W Tizani (Editor) Automatic generation of 3-d building models from multiple

More information

THE USE OF ANISOTROPIC HEIGHT TEXTURE MEASURES FOR THE SEGMENTATION OF AIRBORNE LASER SCANNER DATA

THE USE OF ANISOTROPIC HEIGHT TEXTURE MEASURES FOR THE SEGMENTATION OF AIRBORNE LASER SCANNER DATA THE USE OF ANISOTROPIC HEIGHT TEXTURE MEASURES FOR THE SEGMENTATION OF AIRBORNE LASER SCANNER DATA Sander Oude Elberink* and Hans-Gerd Maas** *Faculty of Civil Engineering and Geosciences Department of

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

CS4758: Rovio Augmented Vision Mapping Project

CS4758: Rovio Augmented Vision Mapping Project CS4758: Rovio Augmented Vision Mapping Project Sam Fladung, James Mwaura Abstract The goal of this project is to use the Rovio to create a 2D map of its environment using a camera and a fixed laser pointer

More information

Automatic image network design leading to optimal image-based 3D models

Automatic image network design leading to optimal image-based 3D models Automatic image network design leading to optimal image-based 3D models Enabling laymen to capture high quality 3D models of Cultural Heritage Bashar Alsadik & Markus Gerke, ITC, University of Twente,

More information

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19

Lecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19 Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line

More information

FOOTPRINTS EXTRACTION

FOOTPRINTS EXTRACTION Building Footprints Extraction of Dense Residential Areas from LiDAR data KyoHyouk Kim and Jie Shan Purdue University School of Civil Engineering 550 Stadium Mall Drive West Lafayette, IN 47907, USA {kim458,

More information

Building Roof Contours Extraction from Aerial Imagery Based On Snakes and Dynamic Programming

Building Roof Contours Extraction from Aerial Imagery Based On Snakes and Dynamic Programming Building Roof Contours Extraction from Aerial Imagery Based On Snakes and Dynamic Programming Antonio Juliano FAZAN and Aluir Porfírio Dal POZ, Brazil Keywords: Snakes, Dynamic Programming, Building Extraction,

More information

INCORPORATING SCENE CONSTRAINTS INTO THE TRIANGULATION OF AIRBORNE OBLIQUE IMAGES

INCORPORATING SCENE CONSTRAINTS INTO THE TRIANGULATION OF AIRBORNE OBLIQUE IMAGES INCORPORATING SCENE CONSTRAINTS INTO THE TRIANGULATION OF AIRBORNE OBLIQUE IMAGES M. Gerke and A.P. Nyaruhuma International Institute for Geo-Information Science and Earth Observation ITC, Department of

More information

INDOOR 3D MODEL RECONSTRUCTION TO SUPPORT DISASTER MANAGEMENT IN LARGE BUILDINGS Project Abbreviated Title: SIMs3D (Smart Indoor Models in 3D)

INDOOR 3D MODEL RECONSTRUCTION TO SUPPORT DISASTER MANAGEMENT IN LARGE BUILDINGS Project Abbreviated Title: SIMs3D (Smart Indoor Models in 3D) INDOOR 3D MODEL RECONSTRUCTION TO SUPPORT DISASTER MANAGEMENT IN LARGE BUILDINGS Project Abbreviated Title: SIMs3D (Smart Indoor Models in 3D) PhD Research Proposal 2015-2016 Promoter: Prof. Dr. Ir. George

More information

CS 231A Computer Vision (Winter 2014) Problem Set 3

CS 231A Computer Vision (Winter 2014) Problem Set 3 CS 231A Computer Vision (Winter 2014) Problem Set 3 Due: Feb. 18 th, 2015 (11:59pm) 1 Single Object Recognition Via SIFT (45 points) In his 2004 SIFT paper, David Lowe demonstrates impressive object recognition

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a

Model-Based Stereo. Chapter Motivation. The modeling system described in Chapter 5 allows the user to create a basic model of a 96 Chapter 7 Model-Based Stereo 7.1 Motivation The modeling system described in Chapter 5 allows the user to create a basic model of a scene, but in general the scene will have additional geometric detail

More information

The use of different data sets in 3-D modelling

The use of different data sets in 3-D modelling The use of different data sets in 3-D modelling Ahmed M. HAMRUNI June, 2014 Presentation outlines Introduction Aims and objectives Test site and data Technology: Pictometry and UltraCamD Results and analysis

More information

Using Perspective Rays and Symmetry to Model Duality

Using Perspective Rays and Symmetry to Model Duality Using Perspective Rays and Symmetry to Model Duality Alex Wang Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2016-13 http://www.eecs.berkeley.edu/pubs/techrpts/2016/eecs-2016-13.html

More information

Extraction of façades with window information from oblique view airborne laser scanning point clouds

Extraction of façades with window information from oblique view airborne laser scanning point clouds Extraction of façades with window information from oblique view airborne laser scanning point clouds Sebastian Tuttas, Uwe Stilla Photogrammetry and Remote Sensing, Technische Universität München, 80290

More information

(Refer Slide Time: 00:02:00)

(Refer Slide Time: 00:02:00) Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 18 Polyfill - Scan Conversion of a Polygon Today we will discuss the concepts

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

CS 130 Final. Fall 2015

CS 130 Final. Fall 2015 CS 130 Final Fall 2015 Name Student ID Signature You may not ask any questions during the test. If you believe that there is something wrong with a question, write down what you think the question is trying

More information

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison

CHAPTER 9. Classification Scheme Using Modified Photometric. Stereo and 2D Spectra Comparison CHAPTER 9 Classification Scheme Using Modified Photometric Stereo and 2D Spectra Comparison 9.1. Introduction In Chapter 8, even we combine more feature spaces and more feature generators, we note that

More information

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016

Pedestrian Detection Using Correlated Lidar and Image Data EECS442 Final Project Fall 2016 edestrian Detection Using Correlated Lidar and Image Data EECS442 Final roject Fall 2016 Samuel Rohrer University of Michigan rohrer@umich.edu Ian Lin University of Michigan tiannis@umich.edu Abstract

More information

Camera Drones Lecture 3 3D data generation

Camera Drones Lecture 3 3D data generation Camera Drones Lecture 3 3D data generation Ass.Prof. Friedrich Fraundorfer WS 2017 Outline SfM introduction SfM concept Feature matching Camera pose estimation Bundle adjustment Dense matching Data products

More information

Keywords: 3D-GIS, R-Tree, Progressive Data Transfer.

Keywords: 3D-GIS, R-Tree, Progressive Data Transfer. 3D Cadastres 3D Data Model Visualisation 3D-GIS IN NETWORKING ENVIRONMENTS VOLKER COORS Fraunhofer Institute for Computer Graphics Germany ABSTRACT In this paper, we present a data model for 3D geometry

More information

Aalborg Universitet. Published in: Accuracy Publication date: Document Version Early version, also known as pre-print

Aalborg Universitet. Published in: Accuracy Publication date: Document Version Early version, also known as pre-print Aalborg Universitet A method for checking the planimetric accuracy of Digital Elevation Models derived by Airborne Laser Scanning Høhle, Joachim; Øster Pedersen, Christian Published in: Accuracy 2010 Publication

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry

cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry cse 252c Fall 2004 Project Report: A Model of Perpendicular Texture for Determining Surface Geometry Steven Scher December 2, 2004 Steven Scher SteveScher@alumni.princeton.edu Abstract Three-dimensional

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week

More information

Oblique aerial imagery in the praxis: applications and challenges

Oblique aerial imagery in the praxis: applications and challenges ISPRS / EuroSDR Workshop on Oblique aerial cameras sensors and data processing Barcelona, 10 October 2017 Oblique aerial imagery in the praxis: applications and challenges Daniela Poli, Kjersti Moe, Klaus

More information

GRAMMAR SUPPORTED FACADE RECONSTRUCTION FROM MOBILE LIDAR MAPPING

GRAMMAR SUPPORTED FACADE RECONSTRUCTION FROM MOBILE LIDAR MAPPING GRAMMAR SUPPORTED FACADE RECONSTRUCTION FROM MOBILE LIDAR MAPPING Susanne Becker, Norbert Haala Institute for Photogrammetry, University of Stuttgart Geschwister-Scholl-Straße 24D, D-70174 Stuttgart forename.lastname@ifp.uni-stuttgart.de

More information

Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements

Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements Planetary Rover Absolute Localization by Combining Visual Odometry with Orbital Image Measurements M. Lourakis and E. Hourdakis Institute of Computer Science Foundation for Research and Technology Hellas

More information

PERFORMANCE EVALUATION OF A SYSTEM FOR SEMI-AUTOMATIC BUILDING EXTRACTION USING ADAPTABLE PRIMITIVES

PERFORMANCE EVALUATION OF A SYSTEM FOR SEMI-AUTOMATIC BUILDING EXTRACTION USING ADAPTABLE PRIMITIVES PERFORMANCE EVALUATION OF A SYSTEM FOR SEMI-AUTOMATIC BUILDING EXTRACTION USING ADAPTABLE PRIMITIVES F. Rottensteiner a, M. Schulze b a Institute of Photogrammetry and Remote Sensing, Vienna University

More information