INTERIOR RECONSTRUCTION USING THE 3D HOUGH TRANSFORM


R.-C. Dumitru, D. Borrmann, and A. Nüchter
Automation Group, School of Engineering and Science, Jacobs University Bremen, Germany

Commission WG V/4

KEY WORDS: interior reconstruction, 3D modeling, plane detection, opening classification, occlusion reconstruction

ABSTRACT:

Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need arises to characterize and quantify complex environments in an automatic fashion, posing challenges for data analysis. This paper presents a system for 3D modeling that detects planes in 3D point clouds and reconstructs the scene at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.

1 INTRODUCTION

Recently, 3D point cloud processing has become popular, e.g., in the context of 3D object recognition by the researchers at Willow Garage and Open Perception using their famous robot, the PR2, and their Point Cloud Library (Willow Garage, 2012) for 3D scan or Kinect-like data. This paper proposes an automatic analysis of 3D point cloud data on a larger scale. It focuses on interior reconstruction from 3D point cloud data. The main tool for achieving this goal is the 3D Hough transform, which aids in the efficient extraction of planes. The proposed software is added to 3DTK - The 3D Toolkit (Lingemann and Nüchter, 2012), which already contains efficient variants of the 3D Hough transform for plane extraction (Borrmann et al., 2011).

Throughout the paper, we will demonstrate the algorithms using 3D data acquired with a Riegl VZ-400 3D laser scanner in a basement room (cf. Figure 1). After presenting related work, we describe our fast plane detection, which is extended to reliably model walls. Afterwards, we describe our implementations of occlusion labeling and opening detection before we finally reconstruct the occluded part of the point cloud using inpainting.

Figure 1: Example environment in a basement used throughout this paper.

2 RELATED WORK

There is a large body of research done in this field, mainly focusing on creating an aesthetically accurate model rather than a geometrically accurate one (Adan and Huber, 2011). Papers that approach the issue from such a point of view are Früh et al. (2005); Hähnel et al. (2003); Stamos et al. (2006); Thrun et al. (2004). Furthermore, Adan and Huber (2011) state that there have been attempts to automate interior and exterior reconstruction, but most of these attempts have tried to create realistic looking models instead of trying to achieve geometrically accurate ones. Now that 3D point cloud acquisition technology has progressed, researchers apply it to realistically scaled real-world environments. This research proposes to extend the work of Borrmann et al. (2011) and that of Adan et al. (2011); Adan and Huber (2011) by combining the approach of plane extraction with the approach of 3D reconstruction of interiors under occlusion and clutter, to automatically reconstruct a scene at a high architectural level. Since architectural shapes of environments follow standard conventions arising from tradition or utility (Fisher, 2002), one can exploit this knowledge for the reconstruction of indoor environments. An interesting approach is presented in Budroni and Böhm (2005), where sweeping planes are used to extract initial planes and a geometric model is computed by intersecting these planes. In outdoor scenarios precise facade reconstruction is popular, including an interpretation step using grammars (Böhm, 2008; Böhm et al., 2007; Pu and Vosselman, 2009; Ripperda and Brenner, 2009). In previous work we have focused on interpreting 3D plane models using a semantic net and on correcting sensor data of a custom-made 3D scanner for mobile robots (Nüchter et al., 2003). In this paper, we focus on precisely reconstructing a room model from a 3D geometric perspective.

3 PLANE DETECTION

Figure 2: Plane detection in the fragmented basement room. Top: Detected planes. Bottom: Planes and 3D points.

The basis of the plane detection algorithm is the patent published by Paul Hough in 1962 (Hough, 1962). The common representation for a plane is the signed distance ρ to the origin of the coordinate system, and the slopes m_x and m_y in direction of the x- and y-axis. All in all, we parametrize a plane as z = m_x x + m_y y + ρ. To avoid problems caused by infinite slopes during the representation of vertical planes, the Hesse Normal Form is used, a method based on normal vectors. The plane is thereby given by a point p on the plane, the normal vector n that is perpendicular to the plane, and the distance ρ to the origin. Therefore, the parametrization of the plane becomes:

ρ = p · n = p_x n_x + p_y n_y + p_z n_z.  (1)

Considering the angles between the normal vector and the coordinate system, the coordinates of n are factorized to

p_x cos θ sin φ + p_y sin φ sin θ + p_z cos φ = ρ,  (2)

with θ the angle of the normal vector on the xy-plane and φ the angle between the xy-plane and the normal vector in z direction. To find planes in a point set, we transform each point into the Hough Space (θ, φ, ρ). Given a point p in Cartesian coordinates, we must determine all the planes the point lies on. Marking these planes in the Hough Space leads to a 3D sinusoid curve. The intersection of two curves in Hough Space denotes the set of all possible planes rotated around the line defined by the two points. Consequently, the intersection of three curves in Hough Space corresponds to the polar coordinates defining the plane spanned by the three points. For practical applications the Hough Space is typically discretized. Each cell in the so-called accumulator corresponds to one plane, and a counter is increased for each point that lies on this plane. Among all Hough variants the Randomized Hough Transform (Xu et al., 1990) is the one of most interest to us, as Borrmann et al. (2011) conclude that it has exceptional performance as far as runtime is concerned. Xu et al. (1990) describe the Randomized Hough Transform (RHT), which decreases the number of cells touched by exploiting the fact that a curve with n parameters is defined by n points.
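The mapping between the Hesse Normal Form and the Hough Space coordinates in Equations (1) and (2) can be made concrete with a small sketch (ours, not the 3DTK implementation; the function name is hypothetical):

```python
import math

def plane_to_hough(p, n):
    """Convert a plane given by a point p and a unit normal n into Hough
    Space coordinates (theta, phi, rho), following Eqs. (1) and (2):
    n = (cos(theta) sin(phi), sin(theta) sin(phi), cos(phi))."""
    theta = math.atan2(n[1], n[0])                 # angle in the xy-plane
    phi = math.acos(max(-1.0, min(1.0, n[2])))     # angle towards the z-axis
    rho = sum(pi * ni for pi, ni in zip(p, n))     # Eq. (1): rho = p . n
    return theta, phi, rho
```

For a horizontal plane through z = 5 this yields (0, 0, 5); for the vertical plane x = 3 the angle φ becomes π/2, illustrating why the Hesse form avoids the infinite slopes of z = m_x x + m_y y + ρ.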
For detecting planes, three points from the input space are mapped onto one point in the Hough Space. This point is the one corresponding to the plane spanned by the three points. In each step the procedure randomly picks three points p_1, p_2 and p_3 from the point cloud. The plane spanned by the three points is calculated as

ρ = n · p_1 = ((p_3 − p_2) × (p_1 − p_2)) · p_1.

φ and θ are calculated from the normal vector and the corresponding cell A(ρ, φ, θ) is accumulated. If the point cloud contains a plane with parameters ρ, φ, θ, then after a certain number of iterations there will be a high score at A(ρ, φ, θ). When a plane is represented by a large number of points, it is more likely that three points from this plane are randomly selected. Eventually the cells corresponding to actual planes receive more votes and are distinguishable from the other cells. If points are very far apart, they most likely do not belong to one plane. To take care of this and to diminish errors from sensor noise, a distance criterion is introduced: the maximum point-to-point distance between p_1, p_2 and p_3 must be below a fixed threshold dist_max; the minimum distance is handled analogously. The basic algorithm is structured as described in Algorithm 1.

Algorithm 1 Randomized Hough Transform (RHT)
 1: while still enough points in point set P do
 2:   randomly pick points p_1, p_2, p_3 from the set of points P
 3:   if p_1, p_2, p_3 fulfill the distance criterion then
 4:     calculate plane (θ, φ, ρ) spanned by p_1, p_2, p_3
 5:     increment cell A(θ, φ, ρ) in the accumulator space
 6:     if the counter A(θ, φ, ρ) equals the threshold t then
 7:       (θ, φ, ρ) parametrize the detected plane
 8:       delete all points close to (θ, φ, ρ) from P
 9:       reset the accumulator
10:     end if
11:   else
12:     continue
13:   end if
14: end while

The RHT has several main advantages. Not all points have to be processed, and for those points considered no complete Hough Transform is necessary.
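Algorithm 1 can be sketched in a few lines of Python. This is an illustrative toy, not the 3DTK implementation; for brevity the accumulator is keyed on the discretized unit normal and distance rather than on (θ, φ, ρ), and all parameter values are placeholders:

```python
import random
from collections import defaultdict

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def rht_planes(points, threshold=30, iterations=5000, cell=0.1, eps=0.05):
    """Toy Randomized Hough Transform: vote with random point triples in a
    sparse accumulator; on reaching the threshold, report the plane, remove
    its inliers from P and reset the accumulator (lines 6-10 of Alg. 1)."""
    P = list(points)
    acc = defaultdict(int)
    planes = []
    for _ in range(iterations):
        if len(P) < 3:
            break
        p1, p2, p3 = random.sample(P, 3)
        n = cross(sub(p3, p2), sub(p1, p2))
        norm = dot(n, n) ** 0.5
        if norm < 1e-9:
            continue                       # degenerate (collinear) triple
        n = tuple(c / norm for c in n)
        if n[2] < 0:                       # canonical orientation of the normal
            n = tuple(-c for c in n)
        rho = dot(n, p1)                   # rho = n . p1
        key = tuple(round(c / cell) for c in n) + (round(rho / cell),)
        acc[key] += 1
        if acc[key] >= threshold:
            planes.append((n, rho))
            P = [p for p in P if abs(dot(n, p) - rho) > eps]
            acc.clear()                    # reset the accumulator
    return planes
```

The distance criterion of line 3 is omitted here for brevity; in practice it prunes triples whose pairwise distances fall outside the configured bounds before any voting takes place.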
Instead, the intersection of three Hough Transform curves is marked in the accumulator to detect the curves one by one. Once there are three points whose plane leads to an accumulation value above a certain threshold t, all points lying on that plane are removed from the input, and hereby the detection efficiency is increased. Since the algorithm does not calculate the complete Hough Transform for all points, it is likely that not the entire Hough Space needs to be touched. There will be many planes on which no input point lies. This calls for space-saving storage procedures that only store the cells actually touched by the Hough Transform. Figure 2 gives an example of the implemented plane detection.

4 WALL DETECTION

Starting from the Hough Transform, this component returns a collection of planes which are separated into vertical and horizontal surfaces using a predefined epsilon value that is part of the initial configuration. All planes that do not fit into either category are simply discarded. We continue by deciding which planes represent the walls, the ceiling and the floor. The ceiling and the floor can easily be extracted from the given information by looking for the lowest and highest horizontal planes. On the other hand, selecting walls from the vertical planes requires additional computation. The implementation of the Hough Transform returns connected planar patches. The center of gravity of the points contained in the hull of a plane corresponds to the application point of the normal in the Hesse normal form. Projecting the points onto a horizontal plane and extracting the concave hull gives a vague notion of the shape of the surface. As opposed to the convex hull, the concave hull is not uniquely defined. However, its usage enables one to detect the walls of concave rooms. For each planar patch the concave hull of the points projected onto it is calculated. To determine the actual walls, similar planes are grouped and the parameters of all wall candidates representing the same wall are averaged. The resulting wall candidates and the highest and lowest horizontal planes are used to compute the corners of the room by computing the intersection of each wall candidate with the ceiling and the floor.
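Each corner is the solution of the 3x3 linear system n_i · x = ρ_i formed by one wall, the ceiling and the floor in Hesse normal form. A minimal sketch of this intersection (ours, not the 3DTK code) using Cramer's rule:

```python
def det3(m):
    # determinant of a 3x3 matrix given as three rows
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def corner(planes):
    """Intersect three planes (n, rho) in Hesse normal form, n . x = rho.
    Returns None if two of the planes are (nearly) parallel."""
    normals = [n for n, _ in planes]
    rhos = [r for _, r in planes]
    d = det3(normals)
    if abs(d) < 1e-9:
        return None
    point = []
    for col in range(3):
        m = [list(n) for n in normals]
        for row in range(3):
            m[row][col] = rhos[row]   # replace one column with rho (Cramer)
        point.append(det3(m) / d)
    return tuple(point)
```

For example, the wall x = 2, the wall y = 3 and the ceiling z = 2.5 intersect in the corner (2, 3, 2.5); near-parallel wall candidates are rejected by the determinant test, which is why similar planes are averaged into one candidate beforehand.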
As a result, we now have a set of points representing the corners of the room. Again, we use these points to compute new planar patches, including their hulls, which are now the final walls, ceiling and floor of the room. See Figure 3 for an example of detected and intersected walls.

5 OCCLUSION LABELING

This component of the pipeline is based on approaches borrowed from computer graphics: octrees (Elseberg et al., 2013), Bresenham's line algorithm and ray tracing (Bresenham, 1965). An octree is a data structure to efficiently store and access spatial point data. The entire space is iteratively divided into voxels of decreasing size. Efficiency is achieved by only further subdividing those voxels that contain data. For occlusion labeling we employ the voxels that intersect with a surface. Each voxel of a surface is assigned one of three possible labels: occupied, empty or occluded. The labeling takes place based on the position of the scanner. The decision between empty and occupied is made by checking whether a surface was detected at that location, i.e., whether a point was recorded in that area. Distinguishing between an empty and an occluded voxel is not possible at this point. This problem is addressed by performing ray tracing to explicitly decide between occluded and empty. Merging data from several scanning positions can help to improve the results by considering the line of sight to the surface at hand and merging the labels into one unified representation. This component of the pipeline yields two images for each surface, namely a labels image and a depth image. The labels image will later help us to determine 3 of the 14 features for the Support Vector Machine (SVM) used to classify openings. The depth image will be used to detect lines and construct opening candidates for the SVM. Many features for this component of the pipeline implement ideas from Adan and Huber (2011). The surfaces with openings are of most interest. Figure 4 presents an example of the labels image and the depth image.
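The empty-versus-occluded decision can be illustrated with a 2D grid sketch (a simplification of the octree-based 3D case; function names are ours): a Bresenham ray is cast from the scanner to every surface voxel, and a hit on an occupied voxel along the way marks the target as occluded.

```python
def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return cells

def label_voxels(scanner, surface_cells, occupied):
    """Label each surface voxel occupied / empty / occluded by casting a
    Bresenham ray from the scanner position (2D sketch of the 3D case)."""
    labels = {}
    for cell in surface_cells:
        if cell in occupied:
            labels[cell] = "occupied"
            continue
        ray = bresenham(*scanner, *cell)
        # if the ray hits an occupied voxel before reaching the target,
        # the target voxel is occluded; otherwise it is genuinely empty
        blocked = any(c in occupied for c in ray[:-1])
        labels[cell] = "occluded" if blocked else "empty"
    return labels
```

Merging several scan positions then amounts to combining the per-position labels per voxel, e.g. letting any position that sees a voxel as empty override occluded.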
6 OPENING DETECTION For this component of our processing pipeline we continue to base our approach on ideas from Adan and Huber (2011). We need to differentiate between actual openings on the surfaces and occluded regions which are basically missing information that needs to be filled in. We make use of the labels and depth images computed in the previous section. A Support Vector Machine (SVM) is used to classify opening candidates and k-means clustering is used to determine which candidates belong together. Figure 3: Detected walls in the basement scan. Figure 4: Top: Labels (red occupied, blue occluded, green empty). Bottom: Corresponding depth image.
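Three of the classifier features mentioned above are simple area statistics over the labels image, i.e. the percentage of occupied, occluded and empty pixels inside a candidate rectangle. A minimal sketch (ours; the labels image is modeled as a 2D list of label strings):

```python
def area_features(labels, rect):
    """Percentages of occupied / occluded / empty pixels inside an opening
    candidate rectangle on the labels image.
    labels: 2D list of label strings; rect: (x0, y0, x1, y1), inclusive."""
    x0, y0, x1, y1 = rect
    counts = {"occupied": 0, "occluded": 0, "empty": 0}
    total = 0
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            counts[labels[y][x]] += 1
            total += 1
    return tuple(counts[k] / total for k in ("occupied", "occluded", "empty"))
```

A genuine opening should score high on the empty percentage, which is also the criterion used later when a cluster representative is picked.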

Figure 5: Line detection in generated range images. Top: Image I_4. Bottom: Hough lines.

The result of this component is a complete 3D model of the interior and a 2D mask which is used to reconstruct the point cloud later on. We start from the depth image I that has been computed previously. We apply the Canny edge detection algorithm and the Sobel operator in the horizontal and in the vertical direction. The result is three different images I_1, I_2, and I_3, respectively. These images are merged into one image and a threshold is applied to every non-zero pixel. This yields a binary image I_4 with enhanced edges. We then proceed by applying the Hough line detection algorithm to I_4, which gives us all the lines in the image. We keep only vertical and horizontal lines and construct all the possible rectangles using these lines. Each rectangle represents one opening candidate. Many of these are already discarded by taking into consideration various predefined values such as the minimum and maximum area of an opening. The Sobel operator was used in addition to the Canny edge detector to make the opening detection algorithm more robust, ensuring that it does not miss any important candidates. Hough line detection on the Canny edges alone turned out not to be sufficient to detect the proper lines in our data. Figure 5 (top) displays the result. In Figure 5 (bottom) vertical lines are drawn in red and horizontal lines are drawn in blue. We take combinations of two vertical and two horizontal lines to determine an opening candidate. There will be many redundant candidates where large clusters of horizontal lines have been detected.
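Constructing candidates from the kept lines amounts to enumerating pairs of vertical and pairs of horizontal lines and filtering the resulting rectangles by area. A minimal sketch (function and parameter names are ours):

```python
from itertools import combinations

def opening_candidates(vertical_x, horizontal_y, min_area, max_area):
    """Build all axis-aligned rectangles from pairs of vertical and
    horizontal Hough lines; discard rectangles outside the area limits.
    Lines are given by their x (vertical) or y (horizontal) coordinate."""
    candidates = []
    for x0, x1 in combinations(sorted(set(vertical_x)), 2):
        for y0, y1 in combinations(sorted(set(horizontal_y)), 2):
            area = (x1 - x0) * (y1 - y0)
            if min_area <= area <= max_area:
                candidates.append((x0, y0, x1, y1))
    return candidates
```

The quadratic blow-up in the number of lines is exactly why clustered, nearly identical lines produce many redundant candidates, which the SVM and the subsequent k-means step have to thin out.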
Once we have determined all the opening candidates, we compute a vector of 14 features for each candidate, as described in Adan and Huber (2011): (1) candidate absolute area; (2) candidate width / height ratio; (3) candidate width / surface width ratio; (4) candidate height / surface height ratio; (5) distance from the upper side of the candidate to the upper side of the surface; (6) distance from the lower side of the candidate to the lower side of the surface; (7) distance from the left side of the candidate to the left side of the surface; (8) distance from the right side of the candidate to the right side of the surface; (9) root mean squared error to the best plane fitted to the rectangle surface; (10) percentage of occupied area; (11) percentage of occluded area; (12) percentage of empty area; (13) number of interior rectangles; and (14) number of inverted U-shaped rectangles (specific to doors). The labels image is used to determine features 10, 11 and 12. The feature vectors are used to classify the candidates using a Support Vector Machine (SVM). To train the SVM, a small database was put together with features of actual openings from various buildings across the campus of Jacobs University Bremen, Germany. An SVM is a learning system that performs classification tasks by constructing hyperplanes in a multidimensional space. We used a Radial Basis Function (RBF) kernel in a binary SVM model. After training the SVM, we give it each candidate as input. The SVM decides whether the candidate represents an opening or not. In the end, the remaining candidates are close representatives of the ground truth used to train the SVM.

Figure 6: Top: SVM learner applied to the Hough lines as presented in Figure 5. Bottom: Result of k-means clustering and check for overlapping candidates.

Even after keeping only the most relevant opening candidates, many of them actually represent the same opening. We proceed by clustering the candidates using k-means clustering. This is a method of cluster analysis that aims to partition n observations, our opening candidates, into k clusters in which each observation belongs to the cluster with the nearest mean. The features we use to differentiate between the candidates are the center and the absolute area of the candidate, because we might have large candidates from one cluster centered at the same position as smaller candidates. After deciding which cluster each candidate belongs to, we need to determine the shape of the final opening. It is important to note that the number of detected clusters is the same as the number of openings on the real surface. We proceed by choosing for each cluster the best candidate, i.e., the one with the largest empty area and the smallest occupied and occluded areas. This candidate is the cluster representative and is kept as the final opening that represents that particular cluster (cf. Figure 6).

After computing the final 2D openings on the depth image, we relabel all remaining empty labels to occluded. This yields the inpaint mask I_m that is used for the inpainting (gap filling) algorithm (Salamanca et al., 2008; Adan and Huber, 2011). Basically, the inpaint mask I_m tells us which patches of the surface are actually missing information and need to be filled in. Figure 7 (top) shows an inpaint mask. Notice that approximately 50% of this basement surface is occluded, which makes the reconstruction significantly challenging. In the next component we present a simple way of reconstructing the actual 3D data from the 2D depth map and the inpaint mask.

7 OCCLUSION RECONSTRUCTION

The last component of the automated processing generates a reconstructed version of the initial point cloud, filling in the missing information induced by occlusions. This component is fairly simple and requires the use of a gap filling algorithm.
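The clustering step described above can be sketched with a plain k-means loop over the candidate feature vectors (center x, center y, absolute area). This is an illustrative sketch, not the pipeline's actual implementation:

```python
import random

def kmeans(points, k, iterations=50, seed=0):
    """Plain k-means: cluster feature vectors around k means and return
    the cluster index assigned to each point."""
    rng = random.Random(seed)
    means = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iterations):
        # assignment step: nearest mean by squared Euclidean distance
        assign = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(p, means[j])))
                  for p in points]
        # update step: recompute each mean from its current members
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                means[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return assign
```

Within each resulting cluster, the representative is then the candidate with the largest empty and smallest occupied and occluded percentages.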
We begin by taking the original depth image I for every surface and apply a gap filling algorithm to it. We used the inpainting algorithm provided in OpenCV, namely the Navier-Stokes method (Bertalmio et al., 2001). We use the inpaint mask I_m computed in the previous component. The result of this process is a reconstructed 2D depth image I_r, with all the missing information filled in. We then compute the mean and standard deviation of the original depth image along the depth axis, and add Gaussian noise with the same mean and standard deviation to the newly reconstructed image, yielding the final reconstructed depth image of the surface I_f. Adding Gaussian noise to the reconstructed depth image gives the inpainted parts a more natural look and feel. After computing the reconstructed depth image and adding noise, we proceed by projecting I_f back into 3D. This yields the reconstructed surface point cloud. Figure 9 shows the reconstructed depth image, from which the 3D point cloud is extrapolated (cf. Figure 8).

Figure 7: Top: Inpaint mask. Bottom: Final 2D opening.

Figure 8: Views of the reconstructed 3D point cloud. The light red points have been added through the inpainting process.

Figure 9: Reconstructed depth image.

Figure 10: Seminar room.

8 EXPERIMENTS AND RESULTS

The interior modeling software is implemented in C/C++ and is part of the Open Source software 3DTK - The 3D Toolkit. The input data were 3D point clouds acquired by a Riegl VZ-400 terrestrial laser scanner. The point clouds were reduced to approximately 5 million points per 3D scan by applying an octree based reduction down to one point per cubic centimeter. Throughout the text we have visualized the results of all intermediate steps of our pipeline using a data set acquired in the basement of the Research I building at Jacobs University Bremen, Germany. The basement data is very well suited for testing our pipeline. Even though it was very fragmented for plane detection, the pipeline was able to accurately determine the six surfaces of

the interior. It was ideal for the surface labeling, since it contained large areas of all three labels, mainly occluded areas, due to numerous pipes running from one side of the room to the other. For opening detection, the only opening on the surface is occluded by a vertical pipe, which challenges the SVM in properly choosing candidates. In the case of windows with thick frames or window mounted fans, the algorithm is able to avoid detecting multiple smaller windows, because it checks for overlapping candidates, in the end keeping the best candidate with the largest empty area, which is always the whole window. Furthermore, we report the results on a second data set, which we acquired in a typical seminar room at our university (cf. Figure 10). It features two windows and the walls are less occluded. For the seminar room data the task of the SVM is simplified, since the openings on the surface do not feature any obstructions. Almost all the opening candidates marked in red overlap very closely with the real openings. Clustering is again trivial, as there are clearly two easily distinguishable clusters among the opening candidates. As before, the best candidate from each cluster is chosen as a final opening. We again inpaint the range image and reconstruct the 3D point cloud as shown in Figure 12.

9 CONCLUSIONS AND OUTLOOK

This paper presents a complete and Open Source system for architectural 3D interior reconstruction from 3D point clouds. It consists of fast 3D plane detection, geometry computation by efficiently calculating intersections of merged planes, semantic labeling in terms of occupied, occluded and empty parts, opening detection using machine learning, and finally 3D point cloud inpainting to recover occluded parts. The implementation is based on OpenCV and 3DTK - The 3D Toolkit and is available on Sourceforge (Lingemann and Nüchter, 2012). Identifying and restoring hidden areas behind furniture and cluttered objects in indoor scenes is challenging.
In this research we approached this task by dividing it into four components, each one relying on the outcome of the previous component. The wall detection and labeling components are highly reliable; however, for the opening detection and the reconstruction, scene-specific fine tuning was needed. One reason is that the opening detection strongly depends on the data used to train the Support Vector Machine. This is one of the drawbacks of using supervised learning methods. Moreover, we considered windows to be of rectangular shape. Although this is a valid assumption, attempting to detect windows of various shapes would severely increase the difficulty of the task. For the reconstruction component, despite being provided a reliable inpaint mask by the previous component, it is difficult to infer what is actually in the occluded areas. 2D inpainting is one approach to approximating what the surface would look like. We have to state that the best solution to reducing occlusion is to take multiple scans. Thus, future work will incorporate multiple scans and address the next-best-view planning problem. So far we have presented a pipeline based on four major components. This modularity offers many advantages, as each component is replaceable by any other approach. In the future, other components will be added and we will allow for any combination of these components to be used. One potential improvement for the opening detection component would be the use of reflectance images. This would greatly increase the robustness of this component. We would be able to generate fewer, more appropriate opening candidates, instead of generating all the possible ones from the extracted lines. This can be achieved by exploiting the intensity information contained in the reflectance image. Needless to say, a lot of work remains to be done. The current purpose of our software is to simplify and automate the generation of building models.
Our next step will be to extend the pipeline with an additional component that will put together multiple scans of interiors and generate the model and reconstructed point cloud of an entire building. Another way of extending the pipeline is to add a component that combines geometrical analysis of point clouds with semantic rules to detect 3D building objects. One approach would be to apply a classification of related knowledge as definitions, and to use partial knowledge and ambiguous knowledge to facilitate the understanding and the design of a building (Duan et al., 2010).

Figure 11: Top row: Detected planes in a seminar room. Second row: Detected planes as overlay to the 3D point cloud. Third row: Image I_4 and the Hough lines. Fourth row: Opening candidates and the result of the k-means clustering. Fifth row: Inpaint mask and final 2D openings. Bottom: Reconstructed depth image.

Figure 12: Reconstructed 3D point clouds in the seminar room.

References

Adan, A. and Huber, D., 2011. 3D reconstruction of interior wall surfaces under occlusion and clutter. In: Proceedings of 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT).

Adan, A., Xiong, X., Akinci, B. and Huber, D., 2011. Automatic creation of semantically rich 3D building models from laser scanner data. In: Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC).

Bertalmio, M., Bertozzi, A. and Sapiro, G., 2001. Navier-Stokes, fluid dynamics, and image and video inpainting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '01).

Böhm, J., 2008. Facade detail from incomplete range data. In: Proceedings of the ISPRS Congress, Beijing, China.

Böhm, J., Becker, S. and Haala, N., 2007. Model refinement by integrated processing of laser scanning and photogrammetry. In: Proceedings of 3D Virtual Reconstruction and Visualization of Complex Architectures (3D-Arch), Zurich, Switzerland.

Borrmann, D., Elseberg, J., Lingemann, K. and Nüchter, A., 2011. The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design. 3D Research.

Bresenham, J. E., 1965. Algorithm for computer control of a digital plotter. IBM Systems Journal 4(1).

Budroni, A. and Böhm, J., 2005. Toward automatic reconstruction of interiors from laser data. In: Proceedings of 3D-Arch, Zurich, Switzerland.

Duan, Y., Cruz, C. and Nicolle, C., 2010. Architectural reconstruction of 3D building objects through semantic knowledge management. In: ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing.

Elseberg, J., Borrmann, D. and Nüchter, A., 2013. One billion points in the cloud - an octree for efficient processing of 3D laser scans. Journal of Photogrammetry and Remote Sensing.

Fisher, R. B., 2002. Applying knowledge to reverse engineering problems. In: Proceedings of the International Conference on Geometric Modeling and Processing (GMP '02), Riken, Japan.

Früh, C., Jain, S. and Zakhor, A., 2005. Data processing algorithms for generating textured 3D building facade meshes from laser scans and camera images. International Journal of Computer Vision (IJCV) 61(2).

Hähnel, D., Burgard, W. and Thrun, S., 2003. Learning compact 3D models of indoor and outdoor environments with a mobile robot. Robotics and Autonomous Systems 44(1).

Hough, P., 1962. Method and means for recognizing complex patterns. US Patent.

Lingemann, K. and Nüchter, A., 2012. The 3D Toolkit.

Nüchter, A., Surmann, H., Lingemann, K. and Hertzberg, J., 2003. Semantic scene analysis of scanned 3D indoor environments. In: Proceedings of the 8th International Fall Workshop Vision, Modeling, and Visualization (VMV '03), Munich, Germany.

Pu, S. and Vosselman, G., 2009. Knowledge based reconstruction of building models from terrestrial laser scanning data. Journal of Photogrammetry and Remote Sensing 64(6).

Ripperda, N. and Brenner, C., 2009. Application of a formal grammar to facade reconstruction in semiautomatic and automatic environments. In: Proceedings of the AGILE Conference on Geographic Information Science, Hannover, Germany.

Salamanca, S., Merchan, P., Perez, E., Adan, A. and Cerrada, C., 2008. Filling holes in 3D meshes using image restoration algorithms. In: Proceedings of the Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT), Atlanta, Georgia, USA.

Stamos, I., Yu, G., Wolberg, G. and Zokai, S., 2006. 3D modeling using planar segments and mesh elements. In: Proceedings of 3D Data Processing, Visualization, and Transmission (3DPVT).

Thrun, S., Martin, C., Liu, Y., Hähnel, D., Montemerlo, R. E., Chakrabarti, D. and Burgard, W., 2004. A real-time expectation maximization algorithm for acquiring multi-planar maps of indoor environments with mobile robots. IEEE Transactions on Robotics 20(3).

Willow Garage, 2012. Point Cloud Library.

Xu, L., Oja, E. and Kultanen, P., 1990. A new curve detection method: Randomized Hough Transform (RHT). Pattern Recognition Letters 11.


Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Perception IV: Place Recognition, Line Extraction

Perception IV: Place Recognition, Line Extraction Perception IV: Place Recognition, Line Extraction Davide Scaramuzza University of Zurich Margarita Chli, Paul Furgale, Marco Hutter, Roland Siegwart 1 Outline of Today s lecture Place recognition using

More information

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds

Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds 1 Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds Takeru Niwa 1 and Hiroshi Masuda 2 1 The University of Electro-Communications, takeru.niwa@uec.ac.jp 2 The University

More information

Semantic As-built 3D Modeling of Buildings under Construction from Laser-scan Data Based on Local Convexity without an As-planned Model

Semantic As-built 3D Modeling of Buildings under Construction from Laser-scan Data Based on Local Convexity without an As-planned Model Semantic As-built 3D Modeling of Buildings under Construction from Laser-scan Data Based on Local Convexity without an As-planned Model H. Son a, J. Na a, and C. Kim a * a Department of Architectural Engineering,

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

AUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA

AUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA AUTOMATIC GENERATION OF DIGITAL BUILDING MODELS FOR COMPLEX STRUCTURES FROM LIDAR DATA Changjae Kim a, Ayman Habib a, *, Yu-Chuan Chang a a Geomatics Engineering, University of Calgary, Canada - habib@geomatics.ucalgary.ca,

More information

An Extension to Hough Transform Based on Gradient Orientation

An Extension to Hough Transform Based on Gradient Orientation An Extension to Hough Transform Based on Gradient Orientation Tomislav Petković and Sven Lončarić University of Zagreb Faculty of Electrical and Computer Engineering Unska 3, HR-10000 Zagreb, Croatia Email:

More information

Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data

Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data Xuehan Xiong b, Antonio Adan a, Burcu Akinci c, Daniel Huber b, a Department of Electrical Engineering, Electronics, and

More information

Cell Decomposition for Building Model Generation at Different Scales

Cell Decomposition for Building Model Generation at Different Scales Cell Decomposition for Building Model Generation at Different Scales Norbert Haala, Susanne Becker, Martin Kada Institute for Photogrammetry Universität Stuttgart Germany forename.lastname@ifp.uni-stuttgart.de

More information

INDOOR 3D MODEL RECONSTRUCTION TO SUPPORT DISASTER MANAGEMENT IN LARGE BUILDINGS Project Abbreviated Title: SIMs3D (Smart Indoor Models in 3D)

INDOOR 3D MODEL RECONSTRUCTION TO SUPPORT DISASTER MANAGEMENT IN LARGE BUILDINGS Project Abbreviated Title: SIMs3D (Smart Indoor Models in 3D) INDOOR 3D MODEL RECONSTRUCTION TO SUPPORT DISASTER MANAGEMENT IN LARGE BUILDINGS Project Abbreviated Title: SIMs3D (Smart Indoor Models in 3D) PhD Research Proposal 2015-2016 Promoter: Prof. Dr. Ir. George

More information

GENERATING BUILDING OUTLINES FROM TERRESTRIAL LASER SCANNING

GENERATING BUILDING OUTLINES FROM TERRESTRIAL LASER SCANNING GENERATING BUILDING OUTLINES FROM TERRESTRIAL LASER SCANNING Shi Pu International Institute for Geo-information Science and Earth Observation (ITC), Hengelosestraat 99, P.O. Box 6, 7500 AA Enschede, The

More information

Perception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.

Perception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich. Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction

More information

3D object recognition used by team robotto

3D object recognition used by team robotto 3D object recognition used by team robotto Workshop Juliane Hoebel February 1, 2016 Faculty of Computer Science, Otto-von-Guericke University Magdeburg Content 1. Introduction 2. Depth sensor 3. 3D object

More information

Robotics Programming Laboratory

Robotics Programming Laboratory Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car

More information

Types of Edges. Why Edge Detection? Types of Edges. Edge Detection. Gradient. Edge Detection

Types of Edges. Why Edge Detection? Types of Edges. Edge Detection. Gradient. Edge Detection Why Edge Detection? How can an algorithm extract relevant information from an image that is enables the algorithm to recognize objects? The most important information for the interpretation of an image

More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

Correcting User Guided Image Segmentation

Correcting User Guided Image Segmentation Correcting User Guided Image Segmentation Garrett Bernstein (gsb29) Karen Ho (ksh33) Advanced Machine Learning: CS 6780 Abstract We tackle the problem of segmenting an image into planes given user input.

More information

DOOR RECOGNITION IN CLUTTERED BUILDING INTERIORS USING IMAGERY AND LIDAR DATA

DOOR RECOGNITION IN CLUTTERED BUILDING INTERIORS USING IMAGERY AND LIDAR DATA DOOR RECOGNITION IN CLUTTERED BUILDING INTERIORS USING IMAGERY AND LIDAR DATA L. Díaz-Vilariño a, *, J. Martínez-Sánchez a, S. Lagüela a, J. Armesto a, K. Khoshelham b a Applied Geotechnologies Research

More information

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING

VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING VOLUMETRIC MODEL REFINEMENT BY SHELL CARVING Y. Kuzu a, O. Sinram b a Yıldız Technical University, Department of Geodesy and Photogrammetry Engineering 34349 Beşiktaş Istanbul, Turkey - kuzu@yildiz.edu.tr

More information

Image Inpainting by Hyperbolic Selection of Pixels for Two Dimensional Bicubic Interpolations

Image Inpainting by Hyperbolic Selection of Pixels for Two Dimensional Bicubic Interpolations Image Inpainting by Hyperbolic Selection of Pixels for Two Dimensional Bicubic Interpolations Mehran Motmaen motmaen73@gmail.com Majid Mohrekesh mmohrekesh@yahoo.com Mojtaba Akbari mojtaba.akbari@ec.iut.ac.ir

More information

BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION

BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION INTRODUCTION BUILDING MODEL RECONSTRUCTION FROM DATA INTEGRATION Ruijin Ma Department Of Civil Engineering Technology SUNY-Alfred Alfred, NY 14802 mar@alfredstate.edu ABSTRACT Building model reconstruction has been

More information

CELL DECOMPOSITION FOR THE GENERATION OF BUILDING MODELS AT MULTIPLE SCALES

CELL DECOMPOSITION FOR THE GENERATION OF BUILDING MODELS AT MULTIPLE SCALES CELL DECOMPOSITION FOR THE GENERATION OF BUILDING MODELS AT MULTIPLE SCALES Norbert Haala, Susanne Becker, Martin Kada Institute for Photogrammetry, Universitaet Stuttgart Geschwister-Scholl-Str. 24D,

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

BUILDING POINT GROUPING USING VIEW-GEOMETRY RELATIONS INTRODUCTION

BUILDING POINT GROUPING USING VIEW-GEOMETRY RELATIONS INTRODUCTION BUILDING POINT GROUPING USING VIEW-GEOMETRY RELATIONS I-Chieh Lee 1, Shaojun He 1, Po-Lun Lai 2, Alper Yilmaz 2 1 Mapping and GIS Laboratory 2 Photogrammetric Computer Vision Laboratory Dept. of Civil

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

Semantic Mapping and Reasoning Approach for Mobile Robotics

Semantic Mapping and Reasoning Approach for Mobile Robotics Semantic Mapping and Reasoning Approach for Mobile Robotics Caner GUNEY, Serdar Bora SAYIN, Murat KENDİR, Turkey Key words: Semantic mapping, 3D mapping, probabilistic, robotic surveying, mine surveying

More information

Lecture 9: Hough Transform and Thresholding base Segmentation

Lecture 9: Hough Transform and Thresholding base Segmentation #1 Lecture 9: Hough Transform and Thresholding base Segmentation Saad Bedros sbedros@umn.edu Hough Transform Robust method to find a shape in an image Shape can be described in parametric form A voting

More information

Probabilistic Matching for 3D Scan Registration

Probabilistic Matching for 3D Scan Registration Probabilistic Matching for 3D Scan Registration Dirk Hähnel Wolfram Burgard Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany Abstract In this paper we consider the problem

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK

TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK 1 Po-Jen Lai ( 賴柏任 ), 2 Chiou-Shann Fuh ( 傅楸善 ) 1 Dept. of Electrical Engineering, National Taiwan University, Taiwan 2 Dept.

More information

Advanced point cloud processing

Advanced point cloud processing Advanced point cloud processing George Vosselman ITC Enschede, the Netherlands INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION Laser scanning platforms Airborne systems mounted

More information

FEATURE-BASED REGISTRATION OF RANGE IMAGES IN DOMESTIC ENVIRONMENTS

FEATURE-BASED REGISTRATION OF RANGE IMAGES IN DOMESTIC ENVIRONMENTS FEATURE-BASED REGISTRATION OF RANGE IMAGES IN DOMESTIC ENVIRONMENTS Michael Wünstel, Thomas Röfer Technologie-Zentrum Informatik (TZI) Universität Bremen Postfach 330 440, D-28334 Bremen {wuenstel, roefer}@informatik.uni-bremen.de

More information

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors

Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors 33 rd International Symposium on Automation and Robotics in Construction (ISARC 2016) Construction Progress Management and Interior Work Analysis Using Kinect 3D Image Sensors Kosei Ishida 1 1 School of

More information

HOUGH TRANSFORM FOR INTERIOR ORIENTATION IN DIGITAL PHOTOGRAMMETRY

HOUGH TRANSFORM FOR INTERIOR ORIENTATION IN DIGITAL PHOTOGRAMMETRY HOUGH TRANSFORM FOR INTERIOR ORIENTATION IN DIGITAL PHOTOGRAMMETRY Sohn, Hong-Gyoo, Yun, Kong-Hyun Yonsei University, Korea Department of Civil Engineering sohn1@yonsei.ac.kr ykh1207@yonsei.ac.kr Yu, Kiyun

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

Range Image Segmentation for Modeling and Object Detection in Urban Scenes

Range Image Segmentation for Modeling and Object Detection in Urban Scenes Range Image Segmentation for Modeling and Object Detection in Urban Scenes Cecilia Chao Chen Ioannis Stamos Graduate Center / CUNY Hunter College / CUNY New York, NY 10016 New York, NY 10021 cchen@gc.cuny.edu

More information

DESIGN OF AN INDOOR MAPPING SYSTEM USING THREE 2D LASER SCANNERS AND 6 DOF SLAM

DESIGN OF AN INDOOR MAPPING SYSTEM USING THREE 2D LASER SCANNERS AND 6 DOF SLAM DESIGN OF AN INDOOR MAPPING SYSTEM USING THREE 2D LASER SCANNERS AND 6 DOF SLAM George Vosselman University of Twente, Faculty ITC, Enschede, the Netherlands george.vosselman@utwente.nl KEY WORDS: localisation,

More information

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies

Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of

More information

Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera

Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera Christian Weiss and Andreas Zell Universität Tübingen, Wilhelm-Schickard-Institut für Informatik,

More information

AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS

AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS Ioannis Stamos Department of Computer Science Hunter College, City University of New York 695 Park Avenue, New York NY 10065 istamos@hunter.cuny.edu http://www.cs.hunter.cuny.edu/

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Exploiting Indoor Mobile Laser Scanner Trajectories for Interpretation of Indoor Scenes

Exploiting Indoor Mobile Laser Scanner Trajectories for Interpretation of Indoor Scenes Exploiting Indoor Mobile Laser Scanner Trajectories for Interpretation of Indoor Scenes March 2018 Promoter: Prof. Dr. Ir. George Vosselman Supervisor: Michael Peter 1 Indoor 3D Model Reconstruction to

More information

CS 4758: Automated Semantic Mapping of Environment

CS 4758: Automated Semantic Mapping of Environment CS 4758: Automated Semantic Mapping of Environment Dongsu Lee, ECE, M.Eng., dl624@cornell.edu Aperahama Parangi, CS, 2013, alp75@cornell.edu Abstract The purpose of this project is to program an Erratic

More information

Biomedical Image Analysis. Point, Edge and Line Detection

Biomedical Image Analysis. Point, Edge and Line Detection Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth

More information

3D Maps. Prof. Dr. Andreas Nüchter Jacobs University Bremen Campus Ring Bremen 1

3D Maps. Prof. Dr. Andreas Nüchter Jacobs University Bremen Campus Ring Bremen 1 Towards Semantic 3D Maps Prof. Dr. Andreas Nüchter Jacobs University Bremen Campus Ring 1 28759 Bremen 1 Acknowledgements I would like to thank the following researchers for joint work and inspiration

More information

REFINEMENT OF BUILDING FASSADES BY INTEGRATED PROCESSING OF LIDAR AND IMAGE DATA

REFINEMENT OF BUILDING FASSADES BY INTEGRATED PROCESSING OF LIDAR AND IMAGE DATA In: Stilla U et al (Eds) PIA07. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 36 (3/W49A) REFINEMENT OF BUILDING FASSADES BY INTEGRATED PROCESSING OF LIDAR

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Three-Dimensional Object Detection and Layout Prediction using Clouds of Oriented Gradients

Three-Dimensional Object Detection and Layout Prediction using Clouds of Oriented Gradients ThreeDimensional Object Detection and Layout Prediction using Clouds of Oriented Gradients Authors: Zhile Ren, Erik B. Sudderth Presented by: Shannon Kao, Max Wang October 19, 2016 Introduction Given an

More information

CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves

CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines Perceptual Grouping and Segmentation

More information

CS4733 Class Notes, Computer Vision

CS4733 Class Notes, Computer Vision CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision

More information

Improving Door Detection for Mobile Robots by fusing Camera and Laser-Based Sensor Data

Improving Door Detection for Mobile Robots by fusing Camera and Laser-Based Sensor Data Improving Door Detection for Mobile Robots by fusing Camera and Laser-Based Sensor Data Jens Hensler, Michael Blaich, and Oliver Bittel University of Applied Sciences Brauneggerstr. 55, 78462 Konstanz,

More information

Reconstruction of Polygonal Faces from Large-Scale Point-Clouds of Engineering Plants

Reconstruction of Polygonal Faces from Large-Scale Point-Clouds of Engineering Plants 1 Reconstruction of Polygonal Faces from Large-Scale Point-Clouds of Engineering Plants Hiroshi Masuda 1, Takeru Niwa 2, Ichiro Tanaka 3 and Ryo Matsuoka 4 1 The University of Electro-Communications, h.masuda@euc.ac.jp

More information

Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing

Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Supplementary Material Introduction In this supplementary material, Section 2 details the 3D annotation for CAD models and real

More information

Acquiring Models of Rectangular 3D Objects for Robot Maps

Acquiring Models of Rectangular 3D Objects for Robot Maps Acquiring Models of Rectangular 3D Objects for Robot Maps Derik Schröter and Michael Beetz Technische Universität München Boltzmannstr. 3, 85748 Garching b. München, Germany schroetdin.tum.de, beetzin.tum.de

More information

Pedestrian Detection Using Multi-layer LIDAR

Pedestrian Detection Using Multi-layer LIDAR 1 st International Conference on Transportation Infrastructure and Materials (ICTIM 2016) ISBN: 978-1-60595-367-0 Pedestrian Detection Using Multi-layer LIDAR Mingfang Zhang 1, Yuping Lu 2 and Tong Liu

More information

Graph-based Modeling of Building Roofs Judith Milde, Claus Brenner Institute of Cartography and Geoinformatics, Leibniz Universität Hannover

Graph-based Modeling of Building Roofs Judith Milde, Claus Brenner Institute of Cartography and Geoinformatics, Leibniz Universität Hannover 12th AGILE International Conference on Geographic Information Science 2009 page 1 of 5 Graph-based Modeling of Building Roofs Judith Milde, Claus Brenner Institute of Cartography and Geoinformatics, Leibniz

More information

Estimation of Camera Pose with Respect to Terrestrial LiDAR Data

Estimation of Camera Pose with Respect to Terrestrial LiDAR Data Estimation of Camera Pose with Respect to Terrestrial LiDAR Data Wei Guan Suya You Guan Pang Computer Science Department University of Southern California, Los Angeles, USA Abstract In this paper, we present

More information

OBJECT detection in general has many applications

OBJECT detection in general has many applications 1 Implementing Rectangle Detection using Windowed Hough Transform Akhil Singh, Music Engineering, University of Miami Abstract This paper implements Jung and Schramm s method to use Hough Transform for

More information

Light Field Occlusion Removal

Light Field Occlusion Removal Light Field Occlusion Removal Shannon Kao Stanford University kaos@stanford.edu Figure 1: Occlusion removal pipeline. The input image (left) is part of a focal stack representing a light field. Each image

More information

Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Supplementary Material

Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Supplementary Material Deep Supervision with Shape Concepts for Occlusion-Aware 3D Object Parsing Supplementary Material Chi Li, M. Zeeshan Zia 2, Quoc-Huy Tran 2, Xiang Yu 2, Gregory D. Hager, and Manmohan Chandraker 2 Johns

More information

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES D. Beloborodov a, L. Mestetskiy a a Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University,

More information

SEMANTIC FEATURE BASED REGISTRATION OF TERRESTRIAL POINT CLOUDS

SEMANTIC FEATURE BASED REGISTRATION OF TERRESTRIAL POINT CLOUDS SEMANTIC FEATURE BASED REGISTRATION OF TERRESTRIAL POINT CLOUDS A. Thapa*, S. Pu, M. Gerke International Institute for Geo-Information Science and Earth Observation (ITC), Hengelosestraat 99, P.O.Box 6,

More information

Janitor Bot - Detecting Light Switches Jiaqi Guo, Haizi Yu December 10, 2010

Janitor Bot - Detecting Light Switches Jiaqi Guo, Haizi Yu December 10, 2010 1. Introduction Janitor Bot - Detecting Light Switches Jiaqi Guo, Haizi Yu December 10, 2010 The demand for janitorial robots has gone up with the rising affluence and increasingly busy lifestyles of people

More information

FOOTPRINTS EXTRACTION

FOOTPRINTS EXTRACTION Building Footprints Extraction of Dense Residential Areas from LiDAR data KyoHyouk Kim and Jie Shan Purdue University School of Civil Engineering 550 Stadium Mall Drive West Lafayette, IN 47907, USA {kim458,

More information

Beyond Bags of Features

Beyond Bags of Features : for Recognizing Natural Scene Categories Matching and Modeling Seminar Instructed by Prof. Haim J. Wolfson School of Computer Science Tel Aviv University December 9 th, 2015

More information

Renderer Implementation: Basics and Clipping. Overview. Preliminaries. David Carr Virtual Environments, Fundamentals Spring 2005

Renderer Implementation: Basics and Clipping. Overview. Preliminaries. David Carr Virtual Environments, Fundamentals Spring 2005 INSTITUTIONEN FÖR SYSTEMTEKNIK LULEÅ TEKNISKA UNIVERSITET Renderer Implementation: Basics and Clipping David Carr Virtual Environments, Fundamentals Spring 2005 Feb-28-05 SMM009, Basics and Clipping 1

More information

A Lightweight SLAM algorithm using Orthogonal Planes for Indoor Mobile Robotics

A Lightweight SLAM algorithm using Orthogonal Planes for Indoor Mobile Robotics Research Collection Conference Paper A Lightweight SLAM algorithm using Orthogonal Planes for Indoor Mobile Robotics Author(s): Nguyen, Viet; Harati, Ahad; Siegwart, Roland Publication Date: 2007 Permanent

More information

Robust Reconstruction of Interior Building Structures with Multiple Rooms under Clutter and Occlusions

Robust Reconstruction of Interior Building Structures with Multiple Rooms under Clutter and Occlusions Robust Reconstruction of Interior Building Structures with Multiple Rooms under Clutter and Occlusions Claudio Mura Oliver Mattausch Alberto Jaspe Villanueva Enrico Gobbetti Renato Pajarola Visualization

More information

Digital Image Processing COSC 6380/4393

Digital Image Processing COSC 6380/4393 Digital Image Processing COSC 6380/4393 Lecture 21 Nov 16 th, 2017 Pranav Mantini Ack: Shah. M Image Processing Geometric Transformation Point Operations Filtering (spatial, Frequency) Input Restoration/

More information

Segmentation

Segmentation Lecture 6: Segmentation 24--4 Robin Strand Centre for Image Analysis Dept. of IT Uppsala University Today What is image segmentation? A smörgåsbord of methods for image segmentation: Thresholding Edge-based

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Canny Edge Detector Hough Transform Feature-Based

More information

Real-Time Human Detection using Relational Depth Similarity Features

Real-Time Human Detection using Relational Depth Similarity Features Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,

More information

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera

Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Outdoor Scene Reconstruction from Multiple Image Sequences Captured by a Hand-held Video Camera Tomokazu Sato, Masayuki Kanbara and Naokazu Yokoya Graduate School of Information Science, Nara Institute

More information

Manhattan-World Assumption for As-built Modeling Industrial Plant

Manhattan-World Assumption for As-built Modeling Industrial Plant Manhattan-World Assumption for As-built Modeling Industrial Plant Tomohiro Mizoguchi 1, Tomokazu Kuma 2, Yoshikazu Kobayashi 3 and Kenji Shirai 4 Department of Computer Science, College of Engineering,

More information

Exploiting Depth Camera for 3D Spatial Relationship Interpretation

Exploiting Depth Camera for 3D Spatial Relationship Interpretation Exploiting Depth Camera for 3D Spatial Relationship Interpretation Jun Ye Kien A. Hua Data Systems Group, University of Central Florida Mar 1, 2013 Jun Ye and Kien A. Hua (UCF) 3D directional spatial relationships

More information

CEng 477 Introduction to Computer Graphics Fall 2007

CEng 477 Introduction to Computer Graphics Fall 2007 Visible Surface Detection CEng 477 Introduction to Computer Graphics Fall 2007 Visible Surface Detection Visible surface detection or hidden surface removal. Realistic scenes: closer objects occludes the

More information

Unwrapping of Urban Surface Models

Unwrapping of Urban Surface Models Unwrapping of Urban Surface Models Generation of virtual city models using laser altimetry and 2D GIS Abstract In this paper we present an approach for the geometric reconstruction of urban areas. It is

More information

Segmentation
