SINGLE IMAGE ORIENTATION USING LINEAR FEATURES AUTOMATICALLY EXTRACTED FROM DIGITAL IMAGES

Nadine Meierhold a, Armin Schmich b
a Technical University of Dresden, Institute of Photogrammetry and Remote Sensing
b kubit GmbH, Fiedlerstraße 36, 01307 Dresden
Email: Nadine.Meierhold@tu-dresden.de, armin.schmich@kubit.de

Abstract: As a first step towards automatic image orientation, this paper presents a method for line extraction from digital image data. The extracted lines are used to orient an image to a laser scanner point cloud of the same scene. The image-line extraction consists of two parts, an automatic one and an interactive one: first, lines are detected, vectorised and visualised in the software product PointCloud. Since some of the extracted lines may not be eligible for image orientation, in the second step the user selects only appropriate features and measures the corresponding 3D lines in the point cloud. The implemented method was evaluated with two different data sets of building facades. For each, the orientation of an image was determined using either point features, manually measured image-lines or automatically extracted image-lines.

1. Introduction

Image orientation is a main task in photogrammetric applications. For single images it can be performed by spatial resection using control points which are known in object space and measured in image space. For a combined interpretation of digital images and laser scanner data, the geometric referencing of these data sets is necessary. This requires control features which are known in the reference system of the laser scanner. Concerning an automatic process for single image orientation, linear features are more suitable than point features, because the extraction of straight lines from a 3D point cloud may be considered easier than the extraction of discrete points.
A comparison of two different methods for line-based image orientation was already shown in Meierhold et al. [5]. Based on the conclusions reached there, the line-based image orientation in this paper follows an approach suggested by Schenk [8]. Image orientation with this approach is based on straight object lines, which are defined in the laser scanner point cloud and described by line equations, and on points on the corresponding image lines. Because of this point-to-line correspondence, the line extraction method has to deliver image-lines described by single points instead of line equations. The image orientation, as well as the associated user interaction, is carried out entirely in the environment of AutoCAD and its runtime extension PointCloud, a software product of the company kubit GmbH. The line extraction method, described in the next section, and the line-based image orientation were implemented and tested with two different data sets. The results are shown in section 3.
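The residual behind this point-to-line correspondence can be illustrated with a small numpy sketch. The function names, the Euler-angle rotation convention and the simple pinhole parameterisation below are our own assumptions for illustration, not taken from Schenk [8]: a 3D line measured in the point cloud is projected into the image with the current orientation parameters, and the perpendicular image-space distance of each measured image point to that projected line is the quantity a line-based resection would minimise.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega) -- one common photogrammetric convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(X, X0, R, c):
    """Collinearity equations: object point X -> image coordinates (x, y)."""
    d = R.T @ (X - X0)                      # point in the camera frame
    return np.array([-c * d[0] / d[2], -c * d[1] / d[2]])

def point_to_line_residuals(img_pts, L0, Ld, X0, R, c):
    """Perpendicular image-space distance of each measured image point to the
    projection of the 3D line L0 + t * Ld (the line-based resection residual)."""
    p1 = project(L0, X0, R, c)
    p2 = project(L0 + Ld, X0, R, c)
    t = (p2 - p1) / np.linalg.norm(p2 - p1)  # unit direction of the image line
    n = np.array([-t[1], t[0]])              # its normal
    return np.array([n @ (p - p1) for p in img_pts])
```

In the adjustment, these residuals would be linearised with respect to the unknown exterior (and, in this paper, also interior) orientation parameters and minimised in a least squares sense.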

2. Line extraction method

One important step towards automatic image orientation is the automation of the feature extraction. The automatic part of the implemented line extraction method consists of three main steps (Fig. 1), which are presented in more detail in the next sections.

Figure 1. Workflow of image-line extraction (left to right): inverted edge map, remaining regions after elimination, basic regions for vectorisation after refinement

The aim of the line extraction is to find long edges with an unchanging sign of curvature. For the purpose of image orientation, only those lines are eligible which can also be located in the 3D point cloud (e.g. physical object boundaries). Because edge detection methods only localise discontinuities in the image signal, which can be caused by many effects (e.g. textures or shadows), many ineligible edges are detected. Automating the selection appears to be rather complex; hence, the selection of appropriate line features is carried out interactively at this stage of the work.

2.1. Edge detection

There are many methods for detecting edge pixels in images. For the implemented method, the Canny edge detector was chosen [3]. First, the gradient magnitude and gradient direction are calculated by convolving the smoothed grey-scale image with the Sobel masks of Eq. 1:

S_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ],   S_y = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]   (1)

Subsequently, the Canny edge detector thins the gradient magnitude array by a non-maximum suppression and detects and links edges by applying the two thresholds T1 and T2 (hysteresis). The result is a binary edge map depicting white edge pixels on a black background.

2.2. Edge selection and refinement

To obtain connected edge regions, the edge map is processed with a connected component analysis using 8-connectivity. Because digital images contain a lot of information, there are many small edge regions (e.g. caused by ornaments or textures) which are not eligible for image orientation. Such regions are eliminated in the next step by analysing the axis-parallel bounding box of each region: if the diagonal of the bounding box is smaller than a preset value C1, the region is eliminated.
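The elimination criterion of section 2.2 can be sketched as follows. This is a pure-numpy illustration (a real implementation would use a connected-component library); the function name and the flood-fill approach are our own choices, not the paper's implementation.

```python
import numpy as np

def filter_small_regions(edge_map, c1):
    """Eliminate 8-connected edge regions whose axis-parallel bounding-box
    diagonal is smaller than the preset value C1 (cf. section 2.2)."""
    edge = np.asarray(edge_map, dtype=bool)
    visited = np.zeros_like(edge)
    keep = np.zeros_like(edge)
    h, w = edge.shape
    for sy, sx in zip(*np.nonzero(edge)):
        if visited[sy, sx]:
            continue
        # flood fill one 8-connected region
        stack, region = [(sy, sx)], []
        visited[sy, sx] = True
        while stack:
            y, x = stack.pop()
            region.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and edge[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        stack.append((ny, nx))
        ys = [p[0] for p in region]
        xs = [p[1] for p in region]
        # bounding-box diagonal criterion
        if np.hypot(max(ys) - min(ys) + 1, max(xs) - min(xs) + 1) >= c1:
            for y, x in region:
                keep[y, x] = True
    return keep
```

Small regions from ornaments or textures fall below the diagonal threshold and vanish, while long facade edges survive.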

Figure 2. Refinement of an edge region: region with black-coloured break pixels (left) and revised region (right)

To revise the regions and to separate different line parts contained in one region, a post-processing step analyses the change of edge direction at each pixel. For each pixel, the connected pixels in a 9x9 neighbourhood are used to adjust a straight line; the obtained line direction is assigned to the current pixel. Afterwards, the variation of the direction at each pixel is calculated by comparing the directions of the edge pixels in a 3x3 neighbourhood, in order to find abrupt changes in edge direction. Pixels with a standard deviation above a threshold R1 are marked (Fig. 2 left), and the connectivity analysis is run again without the marked pixels (Fig. 2 right).

2.3. Vectorisation of edge regions

Because the image-lines have to be described by a sequence of points, the idea of the vectorisation is to approximate the edges by parts of straight lines. This can be realised by a curve fitting algorithm suggested independently by Ramer [7] and Douglas & Peucker [4]. This top-down approach begins with an initial line connecting the start and end point of a pixel chain. The line is then recursively divided at the pixel with the largest distance to the line, until all pixel distances fall below a predefined approximation error ε.

Figure 3. Calculation of azimuths for the direction condition

This basic algorithm is extended by three conditions: the first one influences the functionality of the algorithm, while the other two evaluate the detected line parts or polylines before saving.

Direction condition: This condition is implemented to extract only lines with an unchanging sign of curvature. Before each splitting step, the azimuths from the start to the end pixel (α1 in Fig. 3) and from the start pixel to the potential break pixel (α2 in Fig. 3) are calculated.
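The refinement step above can be sketched in numpy. The window sizes follow the text; the PCA line fit and all names are our own illustrative choices (the paper does not specify how the straight line is adjusted), and angle wrap-around near 0°/180° is ignored in this sketch.

```python
import numpy as np

def edge_directions(edge_map, win=4):
    """Per-pixel edge direction: fit a straight line (principal axis via SVD)
    to the edge pixels inside a 9x9 window centred on each edge pixel."""
    edge = np.asarray(edge_map, dtype=bool)
    ang = np.full(edge.shape, np.nan)
    for y, x in zip(*np.nonzero(edge)):
        y0, y1 = max(0, y - win), min(edge.shape[0], y + win + 1)
        x0, x1 = max(0, x - win), min(edge.shape[1], x + win + 1)
        py, px = np.nonzero(edge[y0:y1, x0:x1])
        pts = np.column_stack([px + x0, py + y0]).astype(float)
        pts -= pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts, full_matrices=False)
        d = vt[0]                                   # principal direction
        ang[y, x] = np.degrees(np.arctan2(d[1], d[0])) % 180.0
    return ang

def mark_break_pixels(edge_map, r1):
    """Mark pixels whose direction varies strongly in a 3x3 neighbourhood
    (standard deviation above R1), cf. Fig. 2 left."""
    edge = np.asarray(edge_map, dtype=bool)
    ang = edge_directions(edge)
    marked = np.zeros_like(edge)
    for y, x in zip(*np.nonzero(edge)):
        nb = ang[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        vals = nb[np.isfinite(nb)]
        if vals.std() > r1:
            marked[y, x] = True
    return marked
```

Running the connectivity analysis again without the marked pixels then splits an L-shaped region into its two straight arms.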
The sizes of the azimuths determine on which side of the initial line the break pixel resides. At the first splitting step, the side of the break pixel is memorised, and all further break pixels have to reside on the same side.

Connectivity condition: Before a line part is stored, it is checked whether the pixels assigned to this line part are connected; a gap of at most two pixels is allowed.
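The basic splitting scheme of Ramer [7] and Douglas & Peucker [4] can be sketched as follows. This is the plain algorithm without the three extra conditions described above; the function name is our own.

```python
import numpy as np

def rdp(points, eps):
    """Ramer / Douglas-Peucker simplification of a pixel chain: recursively
    split at the pixel farthest from the current chord until every pixel is
    within eps of its segment."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    d = points - start
    if norm == 0:
        dists = np.linalg.norm(d, axis=1)
    else:
        # perpendicular distance of each pixel to the chord (2D cross product)
        dists = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / norm
    k = int(np.argmax(dists))
    if dists[k] <= eps:
        return np.array([start, end])
    left = rdp(points[:k + 1], eps)
    right = rdp(points[k:], eps)
    return np.vstack([left[:-1], right])   # drop the duplicated break pixel
```

The direction condition would additionally memorise, at the first split, on which side of the chord the break pixel lies (the sign of the cross product above) and reject further break pixels on the opposite side.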

Length condition: After vectorisation of the entire edge region, the length of the resulting polyline is checked. It is calculated by summing the lengths of the individual line parts. If the total length is smaller than the minimum edge length already used in section 2.2, the polyline is rejected. If the polyline consists of only two vertices, the resulting 2D line is obtained by a least squares adjustment involving all edge pixels whose distance to the line is below ε.

3. Results

3.1. Data sets

Two different data sets of building facades were chosen for evaluating the presented method for image-line extraction and the subsequent image orientation based on the extracted lines. The point clouds of both data sets were acquired with the terrestrial laser scanner Riegl LMS-Z420i. For the Nymphenbad, which is a part of the historical Zwinger in Dresden, the point cloud has a total of three million points with a point spacing from 1.4 to 13 mm. The part of the point cloud corresponding to the chosen image data of the second data set, taken from the Kommode in Berlin, comprises one million points. Because of the large scanning distance, the point spacing varies from 12 mm to 35 mm.

The image of the Nymphenbad was acquired with a 6 Mpix single-lens reflex camera with 7.8 µm pixel size. For the image data of the second data set, also a 6 Mpix single-lens reflex camera was used; with 5.4 µm, it has a smaller pixel size. Additionally, signalised and natural control points were measured on the facade of the Kommode using a reflectorless total station.

3.2. Line extraction

The presented line extraction method was applied to both images using the preset values summarised in the following table:

parameter | Nymphenbad | Kommode
T1        | 0.4*T2     | 0.4*T2
T2        | 15         | 14
C1        | 7 pixel    | 6 pixel
R1        | 2 deg      | 2 deg
ε         | 1 pixel    | 1 pixel

Table 1. Preset values for processing the data sets

A total of about 120 lines were extracted from the image of the Kommode and 92 lines from the Nymphenbad image. From Fig. 4 it is obvious that many long lines were extracted which correspond to object boundaries, so that they can also be located in the 3D point cloud. This is a basic prerequisite for the image orientation. Furthermore, it can be seen (e.g. in the area of the bicycles or columns) that the algorithm is able to vectorise curved edges and to separate closely neighbouring edges. Disadvantageous for the image orientation, concerning the assignment between 2D and 3D features, is the fact that in some cases more than one polyline was extracted for one object edge. These collinear lines have to be merged; at the moment, this is done interactively in the CAD environment. However, Boldt et al. [1] and Mohan & Nevatia [6] suggest algorithms for automating this step.
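A naive collinearity-merging heuristic might look as follows. This is purely our own illustration, not the token-based method of Boldt et al. [1] or the perceptual grouping of Mohan & Nevatia [6]: two segments are merged when their directions agree and their perpendicular offset is small, and the merged line is re-estimated by an orthogonal least-squares fit.

```python
import numpy as np

def fit_line_tls(points):
    """Orthogonal least-squares line through 2D points: the centroid plus the
    principal axis of the point scatter. Returns (point_on_line, unit_dir)."""
    p = np.asarray(points, dtype=float)
    c = p.mean(axis=0)
    _, _, vt = np.linalg.svd(p - c, full_matrices=False)
    return c, vt[0]

def collinear(seg_a, seg_b, max_angle_deg=3.0, max_offset=2.0):
    """Crude collinearity test for two segments given as (P1, P2) endpoint
    pairs: similar direction and small perpendicular offset (thresholds are
    illustrative, in pixels/degrees)."""
    a1, a2 = (np.asarray(p, float) for p in seg_a)
    b1, b2 = (np.asarray(p, float) for p in seg_b)
    da = (a2 - a1) / np.linalg.norm(a2 - a1)
    db = (b2 - b1) / np.linalg.norm(b2 - b1)
    ang = np.degrees(np.arccos(min(1.0, abs(float(da @ db)))))
    n = np.array([-da[1], da[0]])             # normal of segment a
    off = max(abs((b1 - a1) @ n), abs((b2 - a1) @ n))
    return ang <= max_angle_deg and off <= max_offset

def merge(seg_a, seg_b):
    """Merge two collinear segments into one line by an orthogonal
    least-squares fit through all four endpoints."""
    return fit_line_tls(np.vstack([seg_a, seg_b]))
```

The same orthogonal fit also corresponds to the least squares adjustment used in section 2.3 for polylines consisting of only two vertices.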

Figure 4. Detail of the Nymphenbad image (left) and of the Kommode image (right) overlaid with extracted lines (black)

3.3. Image orientation

The extracted lines were imported into PointCloud, where appropriate features were selected for image orientation and the corresponding object lines were measured in the point cloud. For both data sets, three image orientations were calculated independently using different types of features. The first one was a classical photo resection: 11 signalised and 9 natural control points were used for the data set Kommode, and 22 natural points for the data set Nymphenbad, all measured manually in the image and in the point cloud. The line-based image orientation was performed involving manually measured 2D lines on the one hand (method Line1) and automatically extracted 2D lines on the other hand (method Line2) 1. The standard deviations resulting from the image orientations are summarised in Tab. 2. In addition to the exterior orientations of the images, the interior orientations of the cameras and the radial distortion parameters A1 and A2 according to Brown [2] were determined simultaneously for both data sets.

Nymphenbad:
method  s0[pixel]  sX[m]  sY[m]  sZ[m]  sω[deg]  sφ[deg]  sκ[deg]  sc[mm]  sx'[mm]  sy'[mm]
Point   1.5        .4     .8     .3     .63      .61      .64      .17     .16      .29
Line1   1.8        .11    .36    .7     .94      .8       .9       .39     .26      .48
Line2   .53        .8     .53    .18    .61      .52      .59      .52     .25      .41

Kommode:
Point   2.3        .278   .34    .66    1.69     1.235    1.25     .392    .255     .26
Line1   3.13       .341   .55    .65    1.47     1.485    1.427    .51     .29      .281
Line2   3.6        .296   .46    .63    1.231    1.112    1.191    .431    .248     .257

Table 2. Standard deviations of the applied spatial photo resections

Comparing the two data sets, it is noticeable that the precision of the image orientation is better for the Nymphenbad data set.
Possible reasons are more depth information to resolve correlations between the exterior and interior orientation parameters, a better distribution of the features in the image, or a better accuracy of the object features due to the smaller point spacing. Comparing the individual orientation methods, it is evident that the point-based photo resection performs only marginally better than the line-based variants. In the case of the Nymphenbad, this is primarily because the object points were measured directly in the point cloud.

1 Nymphenbad: 16 manual and 23 automatic lines; Kommode: 28 manual and 33 automatic lines

Hence, the accuracy of the object points depends on the point spacing of the point cloud: with increasing point spacing, the accuracy of the object points decreases. From the almost identical or slightly better precision of method Line2 compared to Line1, one can conclude that the automatically extracted image lines are suitable for the purpose of image orientation. The marginally better precision primarily results from the higher redundancy, caused by the fact that the automatically extracted polylines often consist of more than two image points.

4. Conclusions

The presented method is able to extract lines from digital images which can also be located in the 3D point cloud and are therefore eligible for the purpose of image orientation. The comparison with interactively measured image-lines, performed on two different data sets, showed that almost identical or marginally better orientation results are obtainable with automatically extracted lines. To increase the accuracy, the line extraction method can be extended to a sub-pixel accurate line extraction. The ability of the presented method to extract curved lines is advantageous for the processing of distorted images, where straight object lines are imaged as curved lines. Future work will deal with an automatic selection of appropriate lines from the total of extracted image-lines and with the automation of the feature extraction from the 3D point cloud.

Acknowledgements

The research work is funded by resources of the European Fund for Regional Development (EFRE) and by resources of the State of Saxony within the technology development project "CAD-basierte bildgestützte Interpretation von Punktwolken" (CAD-based image-supported interpretation of point clouds). The data set of the Kommode was acquired in cooperation with the Brandenburg University of Technology in Cottbus.

References:
[1] Boldt, M., Weiss, R. & Riseman, E.: Token-based extraction of straight lines, IEEE Transactions on Systems, Man, and Cybernetics, 19 (6), pp. 1581-1594, 1989
[2] Brown, D.C.: Close-range camera calibration, Photogrammetric Engineering, 37 (8), pp. 855-866, 1971
[3] Canny, J.: A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 8 (6), pp. 679-698, 1986
[4] Douglas, D.H. & Peucker, T.K.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature, Cartographica, 10 (2), pp. 112-122, 1973
[5] Meierhold, N., Bienert, A. & Schmich, A.: Line-based referencing between images and laser scanner data for image-based point cloud interpretation in a CAD-environment, International Archives of Photogrammetry, Remote Sensing and Spatial Information Science, Vol. 37, part B5, WG V/3, pp. 437ff, 2008
[6] Mohan, R. & Nevatia, R.: Segmentation and description based on perceptual organization, Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, pp. 333-341, 1989
[7] Ramer, U.: An iterative procedure for the polygonal approximation of plane curves, Computer Graphics and Image Processing, 1 (3), pp. 244-256, 1972
[8] Schenk, T.: From point-based to feature-based aerial triangulation, ISPRS Journal of Photogrammetry and Remote Sensing, 58 (2004), pp. 315-329