Optics and Lasers in Engineering


Optics and Lasers in Engineering 51 (2013)

Integration of LiDAR data and optical multi-view images for 3D reconstruction of building roofs

Liang Cheng a, Lihua Tong a, Yanming Chen a, Wen Zhang a, Jie Shan b,c, Yongxue Liu a, Manchun Li a,*

a Department of Geographical Information Science, Nanjing University, Nanjing, China
b School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, China
c School of Civil Engineering, Purdue University, USA

* Corresponding author. E-mail address: limanchun_nju@126.com (M. Li).

Article history: Received 1 May 2012; received in revised form 26 September 2012; accepted 20 October 2012; available online 22 November 2012.

Keywords: LiDAR data; Multi-view imagery; Building roof; 3D reconstruction

Abstract

An approach that integrates airborne LiDAR data and optical multi-view aerial imagery is presented for the automatic reconstruction of 3D building roof models. It includes two main steps: roof point segmentation and 3D roof model reconstruction. A coarse-to-fine LiDAR data segmentation is proposed to separate the LiDAR points of a building into a set of roof planar segments; it comprises initial segmentation using point normal estimates and segmentation refinement using a new Shrink-Expand technique. A point-based integration mechanism incorporating the segmented roof points and 2D lines extracted from optical multi-view aerial images is then proposed for 3D step line determination, with which the 3D roof models are reconstructed. The experimental results indicate that the proposed approach can provide high-quality 3D roof models over a range of roof structure complexities. © 2012 Elsevier Ltd. All rights reserved.

1. Introduction

The combination of Light Detection And Ranging (LiDAR) data and optical images has received increasing attention for many engineering applications, such as 3D building reconstruction. Optical stereo image technology has been used for the three-dimensional compilation of urban objects for many years. Although it delivers accurate results, its major disadvantage is the low level of automation of the optical stereo-based process [1]. It should be noted that the assumption that a building has only planar roofs is quite common. Airborne LiDAR technology provides directly measured three-dimensional points and has become an important means of deriving 3D building models. However, compared with optical aerial stereo images with a spatial resolution at the centimeter level, airborne LiDAR currently only collects 3D points with a spatial resolution at the 1-m level [2]. Therefore, in contrast with optical products, it is hard to reach the same level of geometric accuracy and detail for building roof models derived from LiDAR data, because these models are restricted by the ground resolution of the LiDAR data and its processing algorithms. The goal of this study is to synthetically utilize the complementary characteristics of LiDAR data and optical aerial images for the derivation of more reliable and accurate 3D building roof models with a high level of automation. Among most of the current related studies, the objectives of data integration involve building boundary refinement, building extraction improvement, texture modeling, and model validation. However, few approaches focus on improving the reliability and geometric accuracy of the reconstructed 3D roof structures.
Using LiDAR data alone is still the main solution in most studies of 3D roof reconstruction. Their common idea is to develop methods of LiDAR data analysis that segment LiDAR roof points and then reconstruct 3D roof planes from the segmented points. Optical multi-view aerial images, with their complementary information (e.g., very high spatial resolution, rich texture information, multi-view observations), are still not sufficiently used to assist the 3D roof reconstruction process. Used appropriately, information extracted from optical multi-view aerial images can help to increase the accuracy and reliability of roof point segmentation. Furthermore, the 3D reconstruction quality of building roofs, especially step-style roof structures, can be effectively improved; in LiDAR-only approaches this quality is significantly limited by irregular point spacing and by the performance of the commonly used boundary regularization algorithms. Therefore, for the automatic reconstruction of 3D roof models with reliable structures and accurate geometric position, a new approach that integrates airborne LiDAR data and optical multi-view aerial imagery is proposed; it consists of two main steps: roof point segmentation and 3D roof model reconstruction.

2. Related work

Combining LiDAR data and imagery for 3D building reconstruction has attracted growing interest in recent years.

To provide a better representation of surface discontinuities, McIntosh and Krupnik [3] produced a digital surface model from LiDAR data and then merged edges detected and matched in the aerial images and the digital surface model. In the approach proposed by Ma [4], the roof planes of a building were derived with a clustering algorithm, and the refinement of building boundaries was performed using aerial stereo imagery. Sohn and Dowman [5] collected rectilinear lines around building boundaries and then obtained a full description of the building boundaries by merging polygons. Cheng et al. [2] proposed an approach that integrates aerial imagery and LiDAR data to reconstruct 3D building models. In that approach, an algorithm for determining the principal orientations of a building was introduced, 3D boundary segments were then determined by incorporating LiDAR data and the 2D segments extracted from images, and a strategy including automatic recovery of lost boundaries was finally used for 3D building model reconstruction. The focus of that study is to improve the quality of building boundaries, not building roofs. That study also gives a detailed review of techniques for integrating LiDAR data and images for 3D building reconstruction. Some other related approaches were introduced in [6–15]. As discussed in the previous section, according to the data integration objective, these reported studies can be classified into four main types: data integration to refine building boundaries, especially by using information derived from very high resolution images; data integration to improve the results of building segmentation and extraction by using information from both datasets; data integration to implement photo-realistic 3D modeling by using images for texture mapping; and data integration for model checking by cross-validation of the two datasets. However, few approaches focus on how to advance the quality of 3D roof models, especially the reliability and geometric accuracy of detailed 3D roof structures, by integrating LiDAR data and images.

Most work on 3D roof reconstruction relies on LiDAR data alone. Dorninger and Pfeifer [16] determined planar regions by hierarchical clustering, derived regularized building outlines from the point clouds, and then reconstructed a building model under the assumption that the model should be the best approximation of the given point cloud. Sohn et al. [17] used a Binary Space Partitioning (BSP) tree for reconstructing polyhedral building models from airborne LiDAR data, in which the core is to globally reconstruct the geometric topology between adjacent linear features by adopting a BSP tree. In the solution framework proposed by Sampath and Shan [18], the surface normals of all planar points were clustered with the fuzzy k-means method and a potential-based approach was then used to optimize this clustering; an adjacency matrix was afterwards formed to express the relationships of the segmented roof planes for building roof reconstruction, and an extended boundary regularization approach was finally developed to achieve topologically consistent and geometrically correct building models.
Kim and Shan [19] presented a level set-based approach for building roof modeling from airborne laser scanning data, in which roof segmentation is performed by minimizing an energy function formulated as a multiphase level set, and 3D roof models are reconstructed from roof structure points derived by intersecting adjacent roof segments or line segments of the building boundary. Some other approaches to 3D building reconstruction using LiDAR data have also been presented [20–22].

3. Integration of LiDAR data and optical images for roof point segmentation

Focusing on the LiDAR points belonging to one building, the goal of this section is to segment these points into a set of segments that uniquely represent roof planar surfaces. A coarse-to-fine segmentation is performed, including initial roof segmentation and roof segmentation refinement. The initial segmentation provides coarse but reliable results from LiDAR data analysis using point normal estimates. The refinement integrates LiDAR data and aerial images to obtain more reliable and accurate point clusters using a new Shrink-Expand technique. The rationale behind the coarse-to-fine sequence comes from the specific data characteristics of the two datasets.

Fig. 1. Initial roof segmentation. (a) Triangle network construction, (b) the detected edges by adjacent triangle analysis, (c) point normal estimation based on local triangle clusters, (d) the detected edges after point normal analysis, (e) wall line removal, (f) short line removal (marked by circles), and (g) the initial roof planar segments.

In the first step, LiDAR data, which provide directly measured height values, are used for average point normal analysis, producing coarse but relatively reliable segmentation results. In this step only LiDAR data are used, without aerial images, because the reliability of the segmentation may be reduced and the computational complexity increased if aerial images with very high spatial resolution and highly complex texture information are used inappropriately. With the support of the initial segmentation results, both LiDAR data and aerial images are then used to obtain a more accurate segmentation by fully exploiting the rich information in the aerial images.

3.1. Initial roof point segmentation from LiDAR data

A Delaunay triangle network is generated from the LiDAR points belonging to one building, as shown in Fig. 1(a). If the angle between two adjacent triangles is larger than a threshold (45° in this study), the shared edge between the two triangles is labeled; it may be a ridge line or a step line. Fig. 1(b) shows the labeled edges obtained by checking all triangle pairs in this network. Due to the inherent data noise, this process often over-detects, meaning that some lines in the triangle network are wrongly considered ridge lines or step lines. To reduce the influence of LiDAR data noise, point normal estimation based on local triangle clusters is introduced to remove wrongly detected edges. For point P_i in Fig. 1(c), its neighboring triangles (T_1, T_2, T_3, T_4, T_5), called its triangle cluster, can be found by analyzing their topological relationships. Based on all vertices of this triangle cluster, a locally fitted RANSAC plane is estimated. The principle of the RANSAC algorithm is to find the best plane among these 3D vertices: three points are randomly selected to calculate the parameters of the corresponding plane, and all vertices belonging to the calculated plane are then detected according to a given distance threshold. These procedures are repeated N times; the result obtained in each iteration is compared with the last saved one, and if the new result covers more points, the saved result is replaced by the new one, otherwise the iteration is stopped. The normal of P_i is then assigned the normal of this RANSAC plane (a minimal sketch of this estimate is given at the end of this subsection). For each labeled edge in Fig. 1(b), the two adjacent triangles associated with it are detected, and the two corresponding points P_i and P_j of this labeled edge are found. The angle difference between these two point normals is computed; if it is smaller than a threshold (20° in this study), the edge is unlabeled and treated as a regular line (neither a ridge line nor a step line). Fig. 1(d) shows the edges remaining after the point normal analysis. Further processing includes wall line removal, short line removal, and suspended line removal. In Fig. 1(e), the wall lines are detected by analyzing two parameters: the height difference between the two endpoints of the line, and the angle between the line and the horizontal plane. After removing wall lines, short lines (marked by circles in Fig. 1(f)), and suspended lines, the point clusters surrounded by the remaining edges represent the roof planar segments. Each roof planar segment is labeled in Fig. 1(g).
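The RANSAC point normal estimate described above can be sketched in a few lines of Python. This is a minimal illustration under assumed iteration count and inlier threshold, not the authors' implementation.

```python
import numpy as np

def ransac_plane_normal(vertices, n_iter=100, dist_thresh=0.15, rng=np.random.default_rng(0)):
    """Estimate a point normal by RANSAC plane fitting over the vertices of
    its local triangle cluster (threshold values are illustrative only)."""
    best_normal, best_count = None, -1
    for _ in range(n_iter):
        # randomly pick three vertices and compute the plane they span
        p1, p2, p3 = vertices[rng.choice(len(vertices), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, try again
            continue
        normal /= norm
        # count vertices within the distance threshold of this plane
        distances = np.abs((vertices - p1) @ normal)
        count = int((distances < dist_thresh).sum())
        if count > best_count:   # keep the plane covering the most vertices
            best_count, best_normal = count, normal
    return best_normal

# Example: vertices of the triangle cluster around P_i (synthetic data)
cluster = np.array([[0, 0, 0.0], [1, 0, 0.02], [0, 1, -0.01],
                    [1, 1, 0.01], [0.5, 0.5, 0.0], [0.2, 0.8, 1.5]])  # last vertex is an outlier
print(ransac_plane_normal(cluster))   # approximately (0, 0, ±1)
```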
3.2. Segmentation refinement based on LiDAR data and optical images

A Shrink-Expand technique is proposed to refine the initial roof segmentation described above. The initial segmentation yields relatively reliable results for roof patch segmentation; however, due to the inherent LiDAR data noise and the complexity of building roof structures, some problems may remain. In particular, near the edge areas of roof planes, a few wrong detections and/or missed detections can occur. For instance, some points located in trees very close to buildings may be wrongly segmented into roof planar segments, and some points belonging to a small roof structure of a building may not be kept as roof points during segmentation. To overcome these problems, the Shrink-Expand technique is proposed. It includes two steps, Shrink and Expand, whose core is a suspicious point removal and re-evaluation strategy. The basic idea of Shrink is suspicious point removal: remove the suspicious points near the edge areas of roofs, even including a few correct points. The idea of Expand is candidate point re-evaluation: find and accept as many of the missed but correct points as possible, based on the Shrink results. The principle of the Shrink-Expand process is thus to improve reliability while preventing information loss. The process is applied to each roof plane in turn.

3.2.1. Shrink

Step A: grid-based point index establishment. A grid with a cell size of three to five times the average point spacing is built to cover the segmented roof patches. Fig. 2(a) illustrates one roof plane covered by this grid. The grid is also used to speed up the search for adjacent points.

Step B: edge area detection. For each roof planar segment, its edge areas are detected. Based on the relationship between the points and the index grid, the points in a grid cell fall into one of four cases: points labeled as non-building (non-building points are used here), points labeled by one roof cluster, points labeled by more than one roof cluster, and points labeled by both non-building and roof clusters. If the member points of a cell belong to either of the latter two cases, the cell is taken as an edge grid, shown as black cells in Fig. 2(a) (a minimal sketch of this classification follows the Shrink steps below).

Step C: removal of points in edge grids. A buffer is established around the edge grids (Fig. 2(b)). For each roof plane, all points belonging to its edge grids or their buffer grids are removed. Fig. 2(c) illustrates the roof planar segments after point removal.
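A minimal sketch of the grid-based index and edge-grid classification of Steps A and B, assuming each LiDAR point carries a cluster label (0 for non-building, 1..K for roof clusters); the cell size and data layout are illustrative, not the authors' data structures.

```python
from collections import defaultdict

def classify_edge_grids(points, labels, cell_size=3.0):
    """Bin labeled points into grid cells and flag cells whose points carry
    more than one roof label, or a mix of roof and non-building labels (edge grids)."""
    cells = defaultdict(set)                       # (col, row) -> set of labels
    for (x, y, _z), label in zip(points, labels):
        cells[(int(x // cell_size), int(y // cell_size))].add(label)
    edge_grids = set()
    for cell, cell_labels in cells.items():
        roof_labels = cell_labels - {0}            # 0 = non-building
        if len(roof_labels) > 1 or (roof_labels and 0 in cell_labels):
            edge_grids.add(cell)                   # mixed cell -> edge grid
    return edge_grids

# Example: two roof clusters (1 and 2) meeting near x = 5, plus a non-building point
pts = [(1.0, 1.0, 10.0), (4.0, 1.0, 10.0), (5.0, 1.0, 7.0), (5.2, 1.2, 0.5)]
lbl = [1, 1, 2, 0]
print(classify_edge_grids(pts, lbl))   # {(1, 0)} -> the cell containing the step edge
```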
3.2.2. Expand

Step A: baseline selection. Based on the remaining points in Fig. 2(c), a Delaunay triangular network is generated, as shown in Fig. 2(d). A baseline is a boundary edge of this triangle network and is found by analyzing the relationship between edges and triangles: if an edge has both a left and a right triangle, it is not a baseline; otherwise, it is a baseline (Fig. 2(e)).

Step B: candidate point determination. For the subsequent Expand process, candidate points are selected based on the selected baselines. Before that, the region for searching the candidate points needs to be determined. In Fig. 2(f), one baseline is selected and its two endpoints are shown as bright points. Based on the aforementioned grid-based point index, the cells containing these two endpoints are identified, and together with their eight neighboring cells they are taken as the searching region. The frame in Fig. 2(f) shows the searching region for this baseline, formed by merging 16 cells. The candidate points are the points inside the black frame but outside the triangle network. Among all candidate points, the point that forms the largest angle with the baseline is given the highest priority, labeled Point d in Fig. 2(f); a minimal sketch of this priority rule follows.
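One plausible reading of the largest-angle priority rule is to rank candidates by the angle subtended at the candidate point by the two baseline endpoints. The following sketch, with made-up coordinates, illustrates that reading; it is not necessarily the exact measure used by the authors.

```python
import numpy as np

def best_candidate(baseline_a, baseline_b, candidates):
    """Return the candidate point subtending the largest angle over the baseline
    segment a-b (one reading of the highest-priority rule in Expand Step B)."""
    a, b = np.asarray(baseline_a, float), np.asarray(baseline_b, float)
    def subtended_angle(p):
        u, v = a - p, b - p
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))
    cands = [np.asarray(p, float) for p in candidates]
    return max(cands, key=subtended_angle)

# Example in 2D grid coordinates: the nearer point subtends the larger angle
print(best_candidate((0, 0), (2, 0), [(1.0, 0.8), (1.0, 3.0)]))   # -> [1.  0.8]
```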

Fig. 2. The Shrink-Expand process for roof segmentation refinement. (a) Grid-based point index establishment and edge-grid detection for each roof patch, (b) buffer zone generation for each grid, (c) Shrink process by removing suspicious points in the buffer zone, (d) triangle network construction based on the remaining points, (e) baseline selection (cyan lines), (f) Expand process by dynamic triangle propagation from a baseline to a candidate point, (g) the refined roof planar segments after the Shrink-Expand process, (h) a comparison illustrating the refined results (points in the box of the left image are removed in the refined results). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Step C: dynamic triangle propagation. A new triangle is created from the highest-priority candidate point and the baseline. Whether this point is reliable is checked against a criterion, introduced in detail below. If the point does not satisfy the criterion, it is deleted and a new candidate point in the searching region is used to create a new triangle; otherwise, the triangular network is updated by adding the new triangle, and the two new edges of the accepted triangle are taken as new baselines.

Step D: repeat Steps B and C; triangle propagation stops when no new baselines can be found.

The criterion for judging the reliability of a candidate point considers three aspects in sequence: (a) slope, (b) spectral information, and (c) texture information. For a candidate point, slope changes between it and its neighbors are considered first; if the change is not consistent with its adjacent region, the candidate point cannot be accepted as a new member of the original triangle network. Spectral information and texture information are then considered in order. If a candidate point passes all three checks, it is accepted as a reliable point and added to the triangle network.

Slope: Fig. 2(f) illustrates a new triangle created from a baseline (between Points a and b) with a candidate Point d. Another Point c lies on the left side of this baseline. The average normal of Point c is calculated using the aforementioned method, and the angle between the normal vector of Point c and the new triangle is computed. If this angle is smaller than the threshold (70° in this study), the transition from the original triangles to the new triangle is considered not smooth and the candidate point cannot be accepted. Otherwise, Point d passes the slope check.

Spectral information: many wrong segmentations occur because trees are very close to buildings (Fig. 2(h)). Spectral information is used to deal with this problem. For Point d in Fig. 2(f), it is easy to find the corresponding location in an aerial image after the registration of LiDAR data and aerial images: since the orientation parameters of the aerial images are known, LiDAR points can be directly projected onto the aerial images by means of the collinearity equations. The aerial imagery covers three bands (red, green, and blue). Two parameters are considered: the ratio of the blue band to the red band, and the sum of the three band reflectances. If the ratio is larger than a threshold (1.3 in this study) and the sum is smaller than a threshold (210 in this study), candidate Point d is considered a non-building point and is rejected. Otherwise, Point d passes the spectral check.
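A minimal sketch of the spectral check, assuming 8-bit RGB values sampled at the projected image location of Point d; the thresholds follow the text, while the function and variable names are illustrative.

```python
def passes_spectral_check(r, g, b, ratio_thresh=1.3, sum_thresh=210):
    """Reject a candidate point that looks like vegetation:
    high blue/red ratio combined with a low overall brightness."""
    ratio = b / max(r, 1e-6)          # guard against division by zero
    total = r + g + b
    # the point is rejected (non-building) only when BOTH conditions hold
    is_vegetation_like = (ratio > ratio_thresh) and (total < sum_thresh)
    return not is_vegetation_like

# Example: a dark bluish pixel (likely tree shadow) vs. a bright roof pixel
print(passes_spectral_check(40, 60, 70))    # False -> rejected
print(passes_spectral_check(120, 110, 100)) # True  -> passes
```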
Texture information: entropy is used as a measure of texture information. For Point d, its entropy value is compared with that of its neighboring region, which is represented by Point c in Fig. 2(f). Taking Point d as the center, a rectangular window (with a size of 1.5 m × 1.5 m, i.e., 1.5 times the average point spacing, in this study) is created in the corresponding aerial image. Eq. (1) is used to convert the color image to a gray image, and Eq. (2) is used to calculate the entropy of this window, which is taken as the entropy value of Point d. The entropy value of Point c is computed in the same way, and the difference between the entropy values of Points d and c is calculated. If the difference is larger than a threshold (3.5 in this study), the candidate point is rejected; otherwise, Point d passes the entropy check.

Gray = R × 0.299 + G × 0.587 + B × 0.114    (1)

H = − Σ_{i=1}^{M} Σ_{j=1}^{N} P_ij log2 P_ij    (2)

where P_ij refers to the probability of the various gray values in the whole image.

Fig. 2(g) illustrates the segmentation refinement results obtained with the Shrink-Expand technique. Fig. 2(h) shows a detailed comparison illustrating the refined results, in which the wrongly detected points (points located in trees in the left image) are removed in the refined results.
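The gray conversion of Eq. (1) and the window entropy of Eq. (2) can be sketched as follows, interpreting P_ij as the probability of the gray values in the window (i.e., the Shannon entropy of its gray-value histogram). This is an illustrative implementation, with the 1.5 m window assumed to be already cropped from the image as a small array.

```python
import numpy as np

def to_gray(rgb):
    """Eq. (1): weighted conversion of an RGB image patch to gray values."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def window_entropy(gray, n_bins=256):
    """Eq. (2): Shannon entropy of the gray-value distribution in a window."""
    hist, _ = np.histogram(gray, bins=n_bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                        # ignore empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

# Example: entropy difference between the windows around Points d and c;
# the candidate is rejected if the difference exceeds the threshold (3.5).
rng = np.random.default_rng(0)
window_d = rng.integers(0, 256, (30, 30, 3))   # stand-in for the patch at Point d
window_c = np.full((30, 30, 3), 128)           # stand-in for the patch at Point c
diff = abs(window_entropy(to_gray(window_d)) - window_entropy(to_gray(window_c)))
print(diff > 3.5)   # True here: a highly textured patch next to a flat one
```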

4. Integration of segmented roof points and optical images for 3D roof model reconstruction

In this section, 3D roof models are reconstructed using the segmented roof points and 3D roof lines derived from the optical images. The 3D roof lines include 3D boundaries, 3D roof ridge lines, and 3D roof step lines. Among them, the 3D boundaries are determined using the approach proposed by Cheng et al. [2], briefly as follows: the principal orientations of a building are estimated; a dynamic selection strategy based on LiDAR point density analysis and K-means clustering is then used to separate boundary segments from non-boundary segments; and the 3D boundary segments are finally determined by incorporating the LiDAR data and the 2D segments extracted from the multi-view imagery. The 3D roof ridge lines can usually be derived by intersecting two adjacent planes, since all roof planes of each building are detected by the above segmentation process. The novelty of this section lies in 3D step line determination. A step line occurs where two roof planes are adjacent to each other with a large height discontinuity but without an intersection line. Step lines are commonly extracted by regularizing roof boundaries [13]. The main limitation of this method is that it is hard to derive highly accurate step lines with good detail from LiDAR data alone, due to the discrete and irregular nature of the data acquisition and its relatively low spatial resolution. To address this problem, this section integrates the segmented roof points and the aerial images to determine 3D step lines with accurate geometric position and good detail, thereby improving the quality of the 3D roof reconstruction.

4.1. 2D roof step line extraction from optical images

A building image is created first, using the method proposed by Cheng et al. [2], to reduce the complexity of line extraction in a high-resolution aerial image. Based on the LiDAR points belonging to a building, a bounding rectangle (BR) and a corresponding raster image of the building are created. The background of Fig. 3(a) is the building image obtained by cutting and filtering an aerial image using the BR and the raster image, respectively. Afterwards, the Edison detector [23] is used as the edge detector, and line segments are then extracted using the Hough transform, as shown in Fig. 3(b).
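As a stand-in for the Edison detector plus Hough transform step, the following OpenCV sketch extracts 2D line segments from a hypothetical building image; the file name, Canny edge detector, and parameter values are assumptions, not the paper's settings.

```python
import cv2

# Load the clipped building image (path and parameters are assumptions).
image = cv2.imread("building_patch.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Edge detection: Canny is used here as a stand-in for the Edison detector [23].
edges = cv2.Canny(gray, 50, 150)

# Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
segments = cv2.HoughLinesP(edges, rho=1, theta=3.14159 / 180, threshold=60,
                           minLineLength=30, maxLineGap=5)
for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
    print((x1, y1), "->", (x2, y2))
```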
The extracted line segments are automatically separated into two sets, step line segments and non-step line segments, by analyzing the relationship between the extracted line segments and the segmented roof points (Fig. 3(c)).

Fig. 3. 2D step line extraction. (a) A workspace image and 3D roof planar segments, (b) line segment extraction from the image in (a), (c) overlay of the extracted segments and the segmented points, (d) the left and right polygons belonging to a line segment, (e) the selected roof lines (red) and the boundaries (black). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Two rectangular boxes with a certain width are generated along the two orthogonal directions of a line segment, following the method proposed by Sohn and Dowman [5]. Fig. 3(d) shows the two kinds of rectangular boxes created for each segment. If points are found in both boxes, the line segment is taken as a candidate step line, because a line segment surrounded by LiDAR points should be located on a roof. Furthermore, the segment is taken as a step line segment only when the points in the left box are attributed to one roof cluster and the points in the right box are attributed to another roof cluster, as illustrated in Fig. 3(d). Fig. 3(e) illustrates the extracted 2D step lines (gray) and the boundaries (black) determined by Cheng et al. [2].

4.2. 3D step line determination from 2D lines and 3D points

A point-based integration mechanism combining 2D lines derived from the optical images and 3D points derived from the LiDAR data is proposed for 3D step line determination. It consists of a sequence of transfers: 2D line → 2D point → 3D point → 3D line. In the first transfer, 2D line to 2D point, 2D points are used to represent an extracted 2D line (Fig. 4(b)) in an aerial image by simply taking the two endpoints of the line. The task of the second transfer, 2D point to 3D point, is to determine a 3D point from a 2D point. The task of the third transfer, 3D point to 3D line, is to obtain a 3D line from 3D points.

In the second transfer, a single point-based photogrammetric technique is introduced to determine a 3D point from a single 2D point. In Fig. 4(c), l is an extracted 2D line in an aerial image, and Points p_0 and p_1 are its two endpoints. For Point p_0, the nearest LiDAR point is found by overlaying it with the LiDAR points projected into the aerial image by the collinearity equations, and the elevation Z of this nearest LiDAR point is taken as the elevation of Point p_0. The task of the second transfer is then to determine the 3D coordinates P_0 (X, Y, Z) in object space from the single 2D image coordinates p_0 (x, y). Imagine a ray passing through the exposure center of the camera and Point p_0 in the image toward a plane at height Z in object space; the 3D Point P_0 is obtained as the intersection of this ray with that plane. Using the collinearity equations of the ray (Eq. (3)), with the three known parameters (x, y, Z), the two unknown parameters (X, Y) can be calculated; that is, Point P_0 with 3D coordinates (X, Y, Z) is determined.

x − x_0 = −f [m_11 (X − X_L) + m_12 (Y − Y_L) + m_13 (Z − Z_L)] / [m_31 (X − X_L) + m_32 (Y − Y_L) + m_33 (Z − Z_L)]
y − y_0 = −f [m_21 (X − X_L) + m_22 (Y − Y_L) + m_23 (Z − Z_L)] / [m_31 (X − X_L) + m_32 (Y − Y_L) + m_33 (Z − Z_L)]    (3)

These equations map 3D object-space coordinates (X, Y, Z) to 2D image observations (x, y). The inner orientation parameters are (x_0, y_0, f); the three angular parameters of the exterior orientation are implicitly defined in the coefficients m_ij, and the location of the exposure center is given by (X_L, Y_L, Z_L).
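Solving Eq. (3) for (X, Y) given (x, y, Z) is a small linear computation (monoplotting). The following is a minimal sketch under assumed interior and exterior orientation values and the sign convention written above; it is not the authors' code.

```python
import numpy as np

def image_point_to_ground(x, y, Z_plane, x0, y0, f, R, C):
    """Intersect the image ray of photo coordinates (x, y) with the horizontal
    plane Z = Z_plane (Eq. (3) solved for X and Y).
    R is the 3x3 rotation matrix of the exterior orientation (object -> image frame),
    C = (X_L, Y_L, Z_L) is the exposure center."""
    # ray direction in object space, from the collinearity model
    d = R.T @ np.array([x - x0, y - y0, -f])
    t = (Z_plane - C[2]) / d[2]          # scale so the ray reaches height Z_plane
    X, Y = C[0] + t * d[0], C[1] + t * d[1]
    return X, Y

# Example with assumed (illustrative) orientation values: a nadir-looking camera
R = np.eye(3)                            # no rotation for simplicity
C = np.array([500.0, 800.0, 1000.0])     # exposure center, meters
f, x0, y0 = 0.12, 0.0, 0.0               # focal length and principal point, meters
print(image_point_to_ground(0.01, -0.02, Z_plane=20.0, x0=x0, y0=y0, f=f, R=R, C=C))
```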
Fig. 4. A point-based integration mechanism for 3D step line determination. (a) A building covered by multi-view aerial images (four views), (b) 2D line to 2D point by selecting the endpoints of the extracted 2D lines, (c) 2D point to 3D point by the single point-based photogrammetric technique (S is the exposure center, O is the projection center, l is an image line, p0 and p1 are image points, L is an object line, P0 and P1 are object points), (d) four sets of 2D lines extracted from four aerial images, shown by different line styles, (e) 3D point to 3D line by RANSAC-based line fitting, and (f) the final determined 3D step lines (red) surrounded by 3D boundaries (black). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

In the third transfer, 3D point to 3D line, the multi-view aerial images (Fig. 4(a)) provide multiple sets of 2D lines corresponding to one real line, as shown in Fig. 4(d). The two endpoints of an extracted 2D line in an aerial image are transferred to two 3D points in object space, so the multi-view aerial images provide many 3D points for one real step line in object space. All these 3D points are then used to construct a new 3D line by RANSAC-based line fitting, as shown in Fig. 4(e). Among these endpoints there may be both inliers and outliers; RANSAC is therefore chosen to construct the new segment, which leads to more robust results than a least-squares fit. The contribution of the multi-view aerial imagery is to increase the reliability and accuracy of the 3D line determination. Fig. 4(f) illustrates the final 3D step lines (gray) surrounded by the 3D boundaries (black).
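The RANSAC line fitting over the collected 3D endpoints can be sketched as follows; a minimal illustration with assumed iteration count and inlier distance, not the authors' implementation.

```python
import numpy as np

def ransac_line_3d(points, n_iter=200, dist_thresh=0.2, rng=np.random.default_rng(1)):
    """Fit a 3D line to noisy endpoints: repeatedly pick two points and keep the
    line (anchor point + unit direction) supported by the most inliers."""
    best = (None, None, -1)
    for _ in range(n_iter):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        if np.linalg.norm(d) < 1e-9:
            continue
        d = d / np.linalg.norm(d)
        # distance of every point to the candidate line through a with direction d
        residuals = np.linalg.norm(np.cross(points - a, d), axis=1)
        count = int((residuals < dist_thresh).sum())
        if count > best[2]:
            best = (a, d, count)
    return best[0], best[1]

# Example: endpoints of one step line collected from four views, with one outlier
pts = np.array([[0, 0, 10.0], [5, 0.05, 10.1], [2.5, -0.02, 10.0],
                [4.0, 0.03, 9.95], [1.0, 3.0, 12.0]])   # last point is an outlier
anchor, direction = ransac_line_3d(pts)
print(anchor, direction)
```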

4.3. 3D roof model reconstruction from 3D lines and 3D points

The 3D roof models are reconstructed from the segmented roof points, the determined 3D step lines, the 3D ridge lines, and the 3D boundaries. The main problem of this reconstruction process is that the topological relationships between these line segments are lost. The SMS (Split-Merge-Shape) method proposed by Rau and Chen [24] is modified to automatically recover the topological relationships between the 3D segments. A polygon is enclosed by the boundaries (black) in Fig. 4(f). Each roof line segment (gray lines in Fig. 4(f)) is extended to split the polygon into two parts, as shown in Fig. 5(a), creating many small polygons enclosed by the extended lines and the boundaries. These small polygons are compared with their neighboring polygons: if the points in two adjacent polygons carry the same label, the two polygons are merged. Fig. 5(b) illustrates the merged polygons, which represent the roof structure of the building. For each polygon, the RANSAC algorithm is used to fit a plane to the LiDAR points belonging to it. Fig. 5(c) illustrates the reconstructed 3D roof models and their corresponding LiDAR points.

5. Evaluation

An experimental region with an area of 2000 m × 2000 m, containing many buildings with different roof structures, sizes, orientations, and roof texture conditions, is selected to validate the effectiveness and applicability of this approach. Fig. 6(a) shows the LiDAR data, which have an average point spacing of 1.0 m, a horizontal accuracy of 0.3 m, and a vertical accuracy of 0.15 m. Optical aerial images with known orientation parameters were collected over the same region, as illustrated in Fig. 6(b). Each image is 3056 by 2032 pixels, with a spatial resolution of 0.05 m. 3D building models constructed manually are taken as reference data for comparison with the 3D models reconstructed by this approach; Autodesk 3ds Max 2010, a powerful 3D modeling package, is used for the manual construction of the 3D building models. In addition, a true-orthophoto is available for checking the quality of the reconstructed 3D models. The quality of the reconstructed 3D rooftop models is evaluated in two respects: qualitative analysis (visual appearance) and quantitative analysis (correctness, completeness, and geometric accuracy).

5.1. Visual appearance

Fig. 7(a) illustrates the reconstructed 3D building roof models in the experimental area. Six models with different roof styles in Fig. 7(a) are shown in detail in Fig. 7(b). From Fig. 7, we can see that the proposed approach can deal with buildings with various complex roof structures. By visually comparing the models reconstructed by the proposed approach with the reference data (including the models derived by manual operation, the true-orthophoto, and the aerial images), most of the reconstructed 3D models coincide closely with the reference data, which means the proposed approach can produce reliable 3D building models with complex roof structures. Regions 1 and 2 are selected in the experimental area for the subsequent quantitative analysis.

Fig. 5. 3D roof model reconstruction. (a) The splits obtained by extending the roof lines in Fig. 4(f), (b) the recovered roof lines (gray) after the merge process, (c) the reconstructed 3D roof models and the corresponding LiDAR points.

Fig. 6. LiDAR and optical images in the experimental area. (a) LiDAR data (different elevation values shown by different gray levels), and (b) LiDAR data and optical aerial images shown by rectangles.

Fig. 7. 3D building roof model reconstruction. (a) The reconstructed 3D building roof models in the experimental area, and (b) six models with different roof styles in (a).

5.2. Correctness and completeness

The focus of this analysis is to estimate the correctness and completeness of the reconstructed roof planes in the 3D models. The correctness and completeness of the 3D roof planes are calculated according to Eq. (4):

Completeness = TP / (TP + FN),  Correctness = TP / (TP + FP)    (4)

where TP is the number of true roof planes, FP is the number of wrong roof planes, and FN is the number of mis-detected roof planes. The roof planes derived by manual modeling are taken as reference data. By overlaying the reconstructed roof planes on the reference data, their overlapping area is calculated. If the ratio of the overlapping area to the area of the reconstructed roof plane is larger than 80%, the reconstructed roof plane is taken as a true one; otherwise, it is considered a wrong one. If a roof plane is not detected by the automatic process, it is called a mis-detected roof plane. Table 1 lists the correctness and completeness of the reconstructed 3D roof planes in Regions 1 and 2. In Region 1, the correctness and completeness of the reconstructed 3D roof planes are 91% and 89%, respectively; in Region 2, they are 96% and 96%, respectively. These statistics show that, compared with Region 1, the proposed approach provides better results in Region 2, because the buildings in that area have relatively simple roof structures. Furthermore, to identify the remaining problems of the proposed approach, we compare the 3D roof models derived by the proposed approach with those from manual modeling. Fig. 8 illustrates their differences: (a) some detailed planes may be reconstructed wrongly by the automatic operation, shown by the A labels; (b) some artifacts, especially small roof structures, may be fictitiously created by the automatic operation, shown by the B labels; and (c) some tiny structures may be lost by the automatic operation, shown by the C labels.
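The plane-level evaluation of Eq. (4) and the 80% overlap rule can be sketched as follows. The overlap values are assumed to be precomputed area ratios, and the counting logic (in particular deriving FN from the reference plane count) is an illustration rather than the authors' evaluation code.

```python
def evaluate_roof_planes(overlap_ratios, n_reference_planes, overlap_thresh=0.8):
    """Eq. (4): completeness and correctness from per-plane overlap ratios.
    overlap_ratios[i] = overlapped area of reconstructed plane i divided by its
    own area, measured against the manually modeled reference planes."""
    tp = sum(1 for r in overlap_ratios if r > overlap_thresh)  # true planes
    fp = len(overlap_ratios) - tp                              # wrong planes
    fn = n_reference_planes - tp                               # mis-detected planes (assumed)
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    return completeness, correctness

# Example: 10 reconstructed planes evaluated against 11 reference planes
ratios = [0.95, 0.91, 0.88, 0.97, 0.83, 0.92, 0.85, 0.90, 0.64, 0.71]
print(evaluate_roof_planes(ratios, n_reference_planes=11))   # (~0.73, 0.80)
```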

5.3. Geometric accuracy

Another concern is the geometric accuracy of the reconstructed 3D roof planes. It is difficult to estimate geometric accuracy in the vertical direction, because the manually constructed 3D models, which lack sufficient height accuracy, cannot serve as reference data for vertical accuracy assessment; the analysis therefore focuses on horizontal accuracy, with the true-orthophoto taken as reference data. Based on the analysis of correctness and completeness, a number of correctly reconstructed roof planes are selected. For each roof plane, every vertex is compared with its corresponding point in the reference data in the horizontal directions by overlaying the reconstructed 3D roof planes on the true-orthophoto, and the distance from the vertex to its reference point (the corresponding point in the true-orthophoto) is measured. In Regions 1 and 2, 25 and 20 vertices are selected, respectively. From the statistics of the selected vertices, the mean error, RMSE, and maximum error are calculated to check the accuracy of the roof planes. Table 2 lists the geometric accuracy of the reconstructed 3D roof planes, and the error distribution of each selected vertex is shown in Fig. 9; the short bold lines represent the errors of the vertices, shown as vectors with the corresponding length and orientation. In Region 1, the mean error, RMSE, and maximum error are 0.30 m, 0.39 m, and 0.74 m, respectively; in Region 2, they are 0.26 m, 0.31 m, and 0.63 m, respectively. All statistics are much smaller than the average point spacing of the LiDAR data used in the experiment, which is a benefit of integrating the multi-view aerial images. From Table 2 and Fig. 9, it is clear that this approach can provide 3D building roof models with high geometric accuracy.

Table 1. The correctness and completeness of the reconstructed 3D roof planes (true, wrong, and mis-detected plane counts with correctness and completeness percentages for Regions 1 and 2; the percentages are quoted in Section 5.2).

Table 2. The geometric accuracy of the reconstructed 3D roof planes (mean error, RMSE, and maximum error in meters for Regions 1 and 2; the values are quoted in the text above).

Fig. 8. Comparison of 3D roof planes derived by the proposed approach (left) and manual operation (right): A, the wrong details; B, some artifacts; C, the missed tiny planes.

Fig. 9. Geometric accuracy of the reconstructed 3D roof planes in Region 1 (left) and Region 2 (right); the short bold lines refer to the errors of the roof plane corners, shown as vectors with the corresponding length and orientation; each error line is enlarged 20 times.

6. Conclusion

A new approach integrating LiDAR data and optical multi-view aerial images is proposed in this study to automatically obtain 3D roof models with reliable structures and accurate geometric position. The main contributions of the proposed approach are: (a) a new Shrink-Expand technique that synthetically utilizes the complementary characteristics of LiDAR data and optical aerial images to improve the reliability of segmentation while avoiding information loss, and (b) a point-based integration mechanism that combines 2D lines extracted from optical multi-view aerial images and 3D points segmented from LiDAR data to determine 3D step lines with high geometric accuracy and good detail, which benefits the quality of the final 3D models. Further studies may address the integration of airborne LiDAR and ground-based data to create 3D building models with detailed facade information. In addition, the proposed approach still requires further improvement in order to automatically reconstruct very complex building structures (e.g., curved structures).

Acknowledgment

This work is supported by the National Natural Science Foundation of China (Grant No. ), the National 973 Project of China (Grant No. 2012CB719904), and the National Key Technology R&D Program of China (Grant No. 2012BAH28B02). Sincere thanks are given for the comments and contributions of the anonymous reviewers and the members of the editorial team.

References

[1] Suveg I, Vosselman G. Reconstruction of 3D building models from aerial images and maps. ISPRS J Photogramm Remote Sens 2004;58(3–4).
[2] Cheng L, Gong J, Li M, Liu Y. 3D building model reconstruction from multi-view aerial imagery and LiDAR data. Photogramm Eng Remote Sens 2011;77(2).
[3] McIntosh K, Krupnik A. Integration of laser-derived DSMs and matched image edges for generating an accurate surface model. ISPRS J Photogramm Remote Sens 2002;56.
[4] Ma R. Building model reconstruction from LiDAR data and aerial photographs. Ph.D. dissertation. Columbus, OH: The Ohio State University.
[5] Sohn G, Dowman I. A model-based approach for reconstructing terrain surface from airborne LiDAR data. Photogramm Rec 2008;23(122).
[6] Stamos I, Allen PK. Geometry and texture recovery of scenes of large scale. Comput Vis Image Underst 2002;88(2).
[7] Rottensteiner F, Briese C. Automatic generation of building models from LiDAR data and the integration of aerial images. Int Arch Photogramm Remote Sens Spat Inf Sci 2003;34(Part 3/W13).
[8] Seo S. Model-based automatic building extraction from LiDAR and aerial imagery. Ph.D. dissertation. Columbus, OH: The Ohio State University.
[9] Frueh C, Zakhor A. An automated method for large-scale, ground-based city model acquisition. Int J Comput Vis 2004;60(1):5–24.
[10] Brenner C. Building reconstruction from images and laser scanning. Int J Appl Earth Obs Geoinf 2005;6(3–4).
[11] Zhang Y, Zhang Z, Zhang J, Wu J. 3D building modeling with digital map, LiDAR data and video image sequences. Photogramm Rec 2005;20(111).
[12] Hu J. Integrating complementary information for photorealistic representation of large-scale environments. Ph.D. dissertation. Los Angeles, CA: University of Southern California.
[13] Sampath A, Shan J. Building boundary tracing and regularization from airborne LiDAR point clouds. Photogramm Eng Remote Sens 2007;73(7).
[14] Sohn G, Dowman I. Data fusion of high-resolution satellite imagery and LiDAR data for automatic building extraction. ISPRS J Photogramm Remote Sens 2007;62(1).
[15] Haala N, Brenner C. Virtual city models from laser altimeter and 2D map data. Photogramm Eng Remote Sens 1999;65(7).
[16] Dorninger P, Pfeifer N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008;8.
[17] Sohn G, Huang X, Tao V. Using a binary space partitioning tree for reconstructing polyhedral building models from airborne LiDAR data. Photogramm Eng Remote Sens 2008;74(11).
[18] Sampath A, Shan J. Segmentation and reconstruction of polyhedral building roofs from aerial LiDAR point clouds. IEEE Trans Geosci Remote Sens 2010;48(3).
[19] Kim K, Shan J. Building roof modeling from airborne laser scanning data based on level set approach. ISPRS J Photogramm Remote Sens 2011;66.
[20] Vosselman G, Kessels P. The utilisation of airborne laser scanning for mapping. Int J Appl Earth Obs Geoinf 2005;6(3–4).
[21] Vestri C. Using range data in automatic modeling of buildings. Image Vis Comput 2006;24(7).
[22] Elberink S, Vosselman G. Quality analysis on 3D building models reconstructed from airborne laser scanning data. ISPRS J Photogramm Remote Sens 2011;66(2).
[23] Meer P, Georgescu B. Edge detection with embedded confidence. IEEE Trans Pattern Anal Mach Intell 2001;23(12).
[24] Rau JY, Chen LC. Robust reconstruction of building models from three-dimensional line segments. Photogramm Eng Remote Sens 2003;69(2).


Automatic Building Extrusion from a TIN model Using LiDAR and Ordnance Survey Landline Data Automatic Building Extrusion from a TIN model Using LiDAR and Ordnance Survey Landline Data Rebecca O.C. Tse, Maciej Dakowicz, Christopher Gold and Dave Kidner University of Glamorgan, Treforest, Mid Glamorgan,

More information

DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION

DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS Yun-Ting Su James Bethel Geomatics Engineering School of Civil Engineering Purdue University 550 Stadium Mall Drive, West Lafayette,

More information

NATIONWIDE POINT CLOUDS AND 3D GEO- INFORMATION: CREATION AND MAINTENANCE GEORGE VOSSELMAN

NATIONWIDE POINT CLOUDS AND 3D GEO- INFORMATION: CREATION AND MAINTENANCE GEORGE VOSSELMAN NATIONWIDE POINT CLOUDS AND 3D GEO- INFORMATION: CREATION AND MAINTENANCE GEORGE VOSSELMAN OVERVIEW National point clouds Airborne laser scanning in the Netherlands Quality control Developments in lidar

More information

WAVELET AND SCALE-SPACE THEORY IN SEGMENTATION OF AIRBORNE LASER SCANNER DATA

WAVELET AND SCALE-SPACE THEORY IN SEGMENTATION OF AIRBORNE LASER SCANNER DATA WAVELET AND SCALE-SPACE THEORY IN SEGMENTATION OF AIRBORNE LASER SCANNER DATA T.Thuy VU, Mitsuharu TOKUNAGA Space Technology Applications and Research Asian Institute of Technology P.O. Box 4 Klong Luang,

More information

POSITIONING A PIXEL IN A COORDINATE SYSTEM

POSITIONING A PIXEL IN A COORDINATE SYSTEM GEOREFERENCING AND GEOCODING EARTH OBSERVATION IMAGES GABRIEL PARODI STUDY MATERIAL: PRINCIPLES OF REMOTE SENSING AN INTRODUCTORY TEXTBOOK CHAPTER 6 POSITIONING A PIXEL IN A COORDINATE SYSTEM The essential

More information

IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1

IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1 IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1 Three-Dimensional Reconstruction of Large Multilayer Interchange Bridge Using Airborne LiDAR Data Liang Cheng, Yang Wu,

More information

ABSTRACT 1. INTRODUCTION

ABSTRACT 1. INTRODUCTION Published in SPIE Proceedings, Vol.3084, 1997, p 336-343 Computer 3-d site model generation based on aerial images Sergei Y. Zheltov, Yuri B. Blokhinov, Alexander A. Stepanov, Sergei V. Skryabin, Alexander

More information

Two Algorithms of Image Segmentation and Measurement Method of Particle s Parameters

Two Algorithms of Image Segmentation and Measurement Method of Particle s Parameters Appl. Math. Inf. Sci. 6 No. 1S pp. 105S-109S (2012) Applied Mathematics & Information Sciences An International Journal @ 2012 NSP Natural Sciences Publishing Cor. Two Algorithms of Image Segmentation

More information

Research on 3D building information extraction and image post-processing based on vehicle LIDAR

Research on 3D building information extraction and image post-processing based on vehicle LIDAR Cang and Yu EURASIP Journal on Image and Video Processing (2018) 2018:121 https://doi.org/10.1186/s13640-018-0356-9 EURASIP Journal on Image and Video Processing RESEARCH Research on 3D building information

More information

INTEGRATED METHOD OF BUILDING EXTRACTION FROM DIGITAL SURFACE MODEL AND IMAGERY

INTEGRATED METHOD OF BUILDING EXTRACTION FROM DIGITAL SURFACE MODEL AND IMAGERY INTEGRATED METHOD OF BUILDING EXTRACTION FROM DIGITAL SURFACE MODEL AND IMAGERY Yan Li 1, *, Lin Zhu, Hideki Shimamura, 1 International Institute for Earth System Science, Nanjing University, Nanjing,

More information

FAST PRODUCTION OF VIRTUAL REALITY CITY MODELS

FAST PRODUCTION OF VIRTUAL REALITY CITY MODELS FAST PRODUCTION OF VIRTUAL REALITY CITY MODELS Claus Brenner and Norbert Haala Institute for Photogrammetry (ifp) University of Stuttgart Geschwister-Scholl-Straße 24, 70174 Stuttgart, Germany Ph.: +49-711-121-4097,

More information

Graph-based Modeling of Building Roofs Judith Milde, Claus Brenner Institute of Cartography and Geoinformatics, Leibniz Universität Hannover

Graph-based Modeling of Building Roofs Judith Milde, Claus Brenner Institute of Cartography and Geoinformatics, Leibniz Universität Hannover 12th AGILE International Conference on Geographic Information Science 2009 page 1 of 5 Graph-based Modeling of Building Roofs Judith Milde, Claus Brenner Institute of Cartography and Geoinformatics, Leibniz

More information

Building Boundary Tracing and Regularization from Airborne Lidar Point Clouds

Building Boundary Tracing and Regularization from Airborne Lidar Point Clouds Building Boundary Tracing and Regularization from Airborne Lidar Point Clouds Aparajithan Sampath and Jie Shan Abstract Building boundary is necessary for the real estate industry, flood management, and

More information

AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA Y. Li a, X. Hu b, H. Guan c, P. Liu d a School of Civil Engineering and Architecture, Nanchang University, 330031,

More information

1. Introduction. A CASE STUDY Dense Image Matching Using Oblique Imagery Towards All-in- One Photogrammetry

1. Introduction. A CASE STUDY Dense Image Matching Using Oblique Imagery Towards All-in- One Photogrammetry Submitted to GIM International FEATURE A CASE STUDY Dense Image Matching Using Oblique Imagery Towards All-in- One Photogrammetry Dieter Fritsch 1, Jens Kremer 2, Albrecht Grimm 2, Mathias Rothermel 1

More information

AN AUTOMATIC 3D RECONSTRUCTION METHOD BASED ON MULTI-VIEW STEREO VISION FOR THE MOGAO GROTTOES

AN AUTOMATIC 3D RECONSTRUCTION METHOD BASED ON MULTI-VIEW STEREO VISION FOR THE MOGAO GROTTOES The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-4/W5, 05 Indoor-Outdoor Seamless Modelling, Mapping and avigation, May 05, Tokyo, Japan A AUTOMATIC

More information

AUTOMATIC RAILWAY POWER LINE EXTRACTION USING MOBILE LASER SCANNING DATA

AUTOMATIC RAILWAY POWER LINE EXTRACTION USING MOBILE LASER SCANNING DATA AUTOMATIC RAILWAY POWER LINE EXTRACTION USING MOBILE LASER SCANNING DATA Shanxin Zhang a,b, Cheng Wang a,, Zhuang Yang a, Yiping Chen a, Jonathan Li a,c a Fujian Key Laboratory of Sensing and Computing

More information

AUTOMATED MODELING OF 3D BUILDING ROOFS USING IMAGE AND LIDAR DATA

AUTOMATED MODELING OF 3D BUILDING ROOFS USING IMAGE AND LIDAR DATA AUTOMATED MODELING OF 3D BUILDING ROOFS USING IMAGE AND LIDAR DATA N. Demir *, E. Baltsavias Institute of Geodesy and Photogrammetry, ETH Zurich, CH-8093, Zurich, Switzerland (demir, manos)@geod.baug.ethz.ch

More information

AUTOMATIC EXTRACTION OF BUILDING ROOFS FROM PICTOMETRY S ORTHOGONAL AND OBLIQUE IMAGES

AUTOMATIC EXTRACTION OF BUILDING ROOFS FROM PICTOMETRY S ORTHOGONAL AND OBLIQUE IMAGES AUTOMATIC EXTRACTION OF BUILDING ROOFS FROM PICTOMETRY S ORTHOGONAL AND OBLIQUE IMAGES Yandong Wang Pictometry International Corp. Suite A, 100 Town Centre Dr., Rochester, NY14623, the United States yandong.wang@pictometry.com

More information

IMPROVED TARGET DETECTION IN URBAN AREA USING COMBINED LIDAR AND APEX DATA

IMPROVED TARGET DETECTION IN URBAN AREA USING COMBINED LIDAR AND APEX DATA IMPROVED TARGET DETECTION IN URBAN AREA USING COMBINED LIDAR AND APEX DATA Michal Shimoni 1 and Koen Meuleman 2 1 Signal and Image Centre, Dept. of Electrical Engineering (SIC-RMA), Belgium; 2 Flemish

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

EDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT

EDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT EDGE EXTRACTION ALGORITHM BASED ON LINEAR PERCEPTION ENHANCEMENT Fan ZHANG*, Xianfeng HUANG, Xiaoguang CHENG, Deren LI State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing,

More information

FACET SHIFT ALGORITHM BASED ON MINIMAL DISTANCE IN SIMPLIFICATION OF BUILDINGS WITH PARALLEL STRUCTURE

FACET SHIFT ALGORITHM BASED ON MINIMAL DISTANCE IN SIMPLIFICATION OF BUILDINGS WITH PARALLEL STRUCTURE FACET SHIFT ALGORITHM BASED ON MINIMAL DISTANCE IN SIMPLIFICATION OF BUILDINGS WITH PARALLEL STRUCTURE GE Lei, WU Fang, QIAN Haizhong, ZHAI Renjian Institute of Surveying and Mapping Information Engineering

More information

Improvement of the Edge-based Morphological (EM) method for lidar data filtering

Improvement of the Edge-based Morphological (EM) method for lidar data filtering International Journal of Remote Sensing Vol. 30, No. 4, 20 February 2009, 1069 1074 Letter Improvement of the Edge-based Morphological (EM) method for lidar data filtering QI CHEN* Department of Geography,

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

EVALUATION OF WORLDVIEW-1 STEREO SCENES AND RELATED 3D PRODUCTS

EVALUATION OF WORLDVIEW-1 STEREO SCENES AND RELATED 3D PRODUCTS EVALUATION OF WORLDVIEW-1 STEREO SCENES AND RELATED 3D PRODUCTS Daniela POLI, Kirsten WOLFF, Armin GRUEN Swiss Federal Institute of Technology Institute of Geodesy and Photogrammetry Wolfgang-Pauli-Strasse

More information

INTEGRATION OF LIDAR AND AIRBORNE IMAGERY FOR REALISTIC VISUALIZATION OF 3D URBAN ENVIRONMENTS

INTEGRATION OF LIDAR AND AIRBORNE IMAGERY FOR REALISTIC VISUALIZATION OF 3D URBAN ENVIRONMENTS INTEGRATION OF LIDAR AND AIRBORNE IMAGERY FOR REALISTIC VISUALIZATION OF 3D URBAN ENVIRONMENTS A. F. Habib*, J. Kersting, T. M. McCaffrey, A. M. Y. Jarvis Dept. of Geomatics Engineering, The University

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

More information

AUTOMATIC IMAGE ORIENTATION BY USING GIS DATA

AUTOMATIC IMAGE ORIENTATION BY USING GIS DATA AUTOMATIC IMAGE ORIENTATION BY USING GIS DATA Jeffrey J. SHAN Geomatics Engineering, School of Civil Engineering Purdue University IN 47907-1284, West Lafayette, U.S.A. jshan@ecn.purdue.edu Working Group

More information

RANSAC APPROACH FOR AUTOMATED REGISTRATION OF TERRESTRIAL LASER SCANS USING LINEAR FEATURES

RANSAC APPROACH FOR AUTOMATED REGISTRATION OF TERRESTRIAL LASER SCANS USING LINEAR FEATURES RANSAC APPROACH FOR AUTOMATED REGISTRATION OF TERRESTRIAL LASER SCANS USING LINEAR FEATURES K. AL-Durgham, A. Habib, E. Kwak Department of Geomatics Engineering, University of Calgary, Calgary, Alberta,

More information

Object-Based Classification & ecognition. Zutao Ouyang 11/17/2015

Object-Based Classification & ecognition. Zutao Ouyang 11/17/2015 Object-Based Classification & ecognition Zutao Ouyang 11/17/2015 What is Object-Based Classification The object based image analysis approach delineates segments of homogeneous image areas (i.e., objects)

More information

AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGARY

AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGARY AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGARY T. Partovi a *, H. Arefi a,b, T. Krauß a, P. Reinartz a a German Aerospace Center (DLR), Remote Sensing Technology Institute,

More information

CE 59700: LASER SCANNING

CE 59700: LASER SCANNING Digital Photogrammetry Research Group Lyles School of Civil Engineering Purdue University, USA Webpage: http://purdue.edu/ce/ Email: ahabib@purdue.edu CE 59700: LASER SCANNING 1 Contact Information Instructor:

More information

A METHOD TO PREDICT ACCURACY OF LEAST SQUARES SURFACE MATCHING FOR AIRBORNE LASER SCANNING DATA SETS

A METHOD TO PREDICT ACCURACY OF LEAST SQUARES SURFACE MATCHING FOR AIRBORNE LASER SCANNING DATA SETS A METHOD TO PREDICT ACCURACY OF LEAST SQUARES SURFACE MATCHING FOR AIRBORNE LASER SCANNING DATA SETS Robert Pâquet School of Engineering, University of Newcastle Callaghan, NSW 238, Australia (rpaquet@mail.newcastle.edu.au)

More information

From Multi-sensor Data to 3D Reconstruction of Earth Surface: Innovative, Powerful Methods for Geoscience and Other Applications

From Multi-sensor Data to 3D Reconstruction of Earth Surface: Innovative, Powerful Methods for Geoscience and Other Applications From Multi-sensor Data to 3D Reconstruction of Earth Surface: Innovative, Powerful Methods for Geoscience and Other Applications Bea Csatho, Toni Schenk*, Taehun Yoon* and Michael Sheridan, Department

More information

DIGITAL SURFACE MODELS OF CITY AREAS BY VERY HIGH RESOLUTION SPACE IMAGERY

DIGITAL SURFACE MODELS OF CITY AREAS BY VERY HIGH RESOLUTION SPACE IMAGERY DIGITAL SURFACE MODELS OF CITY AREAS BY VERY HIGH RESOLUTION SPACE IMAGERY Jacobsen, K. University of Hannover, Institute of Photogrammetry and Geoinformation, Nienburger Str.1, D30167 Hannover phone +49

More information

MONO-IMAGE INTERSECTION FOR ORTHOIMAGE REVISION

MONO-IMAGE INTERSECTION FOR ORTHOIMAGE REVISION MONO-IMAGE INTERSECTION FOR ORTHOIMAGE REVISION Mohamed Ibrahim Zahran Associate Professor of Surveying and Photogrammetry Faculty of Engineering at Shoubra, Benha University ABSTRACT This research addresses

More information

AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS

AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS Ioannis Stamos Department of Computer Science Hunter College, City University of New York 695 Park Avenue, New York NY 10065 istamos@hunter.cuny.edu http://www.cs.hunter.cuny.edu/

More information

Interpretation of Urban Surface Models using 2D Building Information Norbert Haala and Claus Brenner Institut fur Photogrammetrie Universitat Stuttgar

Interpretation of Urban Surface Models using 2D Building Information Norbert Haala and Claus Brenner Institut fur Photogrammetrie Universitat Stuttgar Interpretation of Urban Surface Models using 2D Building Information Norbert Haala and Claus Brenner Institut fur Photogrammetrie Universitat Stuttgart Geschwister-Scholl-Strae 24, 70174 Stuttgart, Germany

More information

GRAPHICS TOOLS FOR THE GENERATION OF LARGE SCALE URBAN SCENES

GRAPHICS TOOLS FOR THE GENERATION OF LARGE SCALE URBAN SCENES GRAPHICS TOOLS FOR THE GENERATION OF LARGE SCALE URBAN SCENES Norbert Haala, Martin Kada, Susanne Becker, Jan Böhm, Yahya Alshawabkeh University of Stuttgart, Institute for Photogrammetry, Germany Forename.Lastname@ifp.uni-stuttgart.de

More information

Camera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006

Camera Registration in a 3D City Model. Min Ding CS294-6 Final Presentation Dec 13, 2006 Camera Registration in a 3D City Model Min Ding CS294-6 Final Presentation Dec 13, 2006 Goal: Reconstruct 3D city model usable for virtual walk- and fly-throughs Virtual reality Urban planning Simulation

More information

Geometric Rectification Using Feature Points Supplied by Straight-lines

Geometric Rectification Using Feature Points Supplied by Straight-lines Available online at www.sciencedirect.com Procedia Environmental Sciences (0 ) 00 07 Geometric Rectification Using Feature Points Supplied by Straight-lines Tengfei Long, Weili Jiao, Wei Wang Center for

More information

RECOGNISING STRUCTURE IN LASER SCANNER POINT CLOUDS 1

RECOGNISING STRUCTURE IN LASER SCANNER POINT CLOUDS 1 RECOGNISING STRUCTURE IN LASER SCANNER POINT CLOUDS 1 G. Vosselman a, B.G.H. Gorte b, G. Sithole b, T. Rabbani b a International Institute of Geo-Information Science and Earth Observation (ITC) P.O. Box

More information

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009

Learning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009 Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer

More information

City-Modeling. Detecting and Reconstructing Buildings from Aerial Images and LIDAR Data

City-Modeling. Detecting and Reconstructing Buildings from Aerial Images and LIDAR Data City-Modeling Detecting and Reconstructing Buildings from Aerial Images and LIDAR Data Department of Photogrammetrie Institute for Geodesy and Geoinformation Bonn 300000 inhabitants At river Rhine University

More information

Algorithm research of 3D point cloud registration based on iterative closest point 1

Algorithm research of 3D point cloud registration based on iterative closest point 1 Acta Technica 62, No. 3B/2017, 189 196 c 2017 Institute of Thermomechanics CAS, v.v.i. Algorithm research of 3D point cloud registration based on iterative closest point 1 Qian Gao 2, Yujian Wang 2,3,

More information

Reconstruction of complete 3D object model from multi-view range images.

Reconstruction of complete 3D object model from multi-view range images. Header for SPIE use Reconstruction of complete 3D object model from multi-view range images. Yi-Ping Hung *, Chu-Song Chen, Ing-Bor Hsieh, Chiou-Shann Fuh Institute of Information Science, Academia Sinica,

More information

Data Representation in Visualisation

Data Representation in Visualisation Data Representation in Visualisation Visualisation Lecture 4 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Data Representation 1 Data Representation We have

More information

Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support IOP Conference Series: Earth and Environmental Science OPEN ACCESS Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support To cite this article: S Schubiger-Banz

More information

Geometric Rectification of Remote Sensing Images

Geometric Rectification of Remote Sensing Images Geometric Rectification of Remote Sensing Images Airborne TerrestriaL Applications Sensor (ATLAS) Nine flight paths were recorded over the city of Providence. 1 True color ATLAS image (bands 4, 2, 1 in

More information

International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998

International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 RAPID ACQUISITION OF VIRTUAL REALITY CITY MODELS FROM MULTIPLE DATA SOURCES Claus Brenner and Norbert Haala

More information

Image warping and stitching

Image warping and stitching Image warping and stitching Thurs Oct 15 Last time Feature-based alignment 2D transformations Affine fit RANSAC 1 Robust feature-based alignment Extract features Compute putative matches Loop: Hypothesize

More information

DIGITAL TERRAIN MODELS

DIGITAL TERRAIN MODELS DIGITAL TERRAIN MODELS 1 Digital Terrain Models Dr. Mohsen Mostafa Hassan Badawy Remote Sensing Center GENERAL: A Digital Terrain Models (DTM) is defined as the digital representation of the spatial distribution

More information

[Youn *, 5(11): November 2018] ISSN DOI /zenodo Impact Factor

[Youn *, 5(11): November 2018] ISSN DOI /zenodo Impact Factor GLOBAL JOURNAL OF ENGINEERING SCIENCE AND RESEARCHES AUTOMATIC EXTRACTING DEM FROM DSM WITH CONSECUTIVE MORPHOLOGICAL FILTERING Junhee Youn *1 & Tae-Hoon Kim 2 *1,2 Korea Institute of Civil Engineering

More information

EVOLUTION OF POINT CLOUD

EVOLUTION OF POINT CLOUD Figure 1: Left and right images of a stereo pair and the disparity map (right) showing the differences of each pixel in the right and left image. (source: https://stackoverflow.com/questions/17607312/difference-between-disparity-map-and-disparity-image-in-stereo-matching)

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information