Volume Measurement Using a Laser Scanner


Xin Zhang, John Morris, and Reinhard Klette
CITR, Computer Science Department, The University of Auckland, Private Bag 92019, Auckland

Abstract

This article discusses ways of measuring the accuracy of a laser range finder (LRF) using a specially designed calibration cube (with 3D calibration marks) in the context of a particular application, namely measuring volumes. We also compare, by means of experiments, two alternative volume estimation methods.

Keywords: laser range finder, volume calculation, accuracy.

1 Introduction

Accurate 3D shape reconstruction and volume estimation are important in many applications, for example erosion studies, estimation of ore removed from a mine face, and terrain assessment for construction [15]. Laser range finders (LRFs), also called 3D laser scanners, are an attractive tool for these tasks because they remotely sense the surfaces delimiting the volume of interest and provide a dense cloud of measurements from which distances are derived directly in a single step (conversion of time to distance or phase to distance); see [8] for a review.

Laser range finders measure distances to visible surface points: a typical scanning LRF system consists of a laser, steering optics which allow the beam to be scanned through a scene, receiving optics, and electronics for measuring the time-of-flight or phase difference. Usually, a conventional digital camera is added to enable the operator to choose the area to be scanned and to capture a color image of the scanned area.

In this study, we set out to assess the ability of LRF systems to capture scene geometry from which volume estimates can be made and, in particular, the accuracy and repeatability of measurements and the volumes derived from them. The error sources of an LRF include electrical noise, variations in the pulse shape and amplitude of reflected optical pulses, and changes in the delay of optical and electrical signals [14].
Transmission through media with refractive indices greater than 1 (e.g., glass) will lead to incorrect depth measurements. The accuracy of an LRF may also be affected by temperature changes. An LRF will generally produce a dense point cloud with samples distributed evenly throughout the scene at a spacing determined by the scanning parameters. However, it relies upon each sampled point in the scene having sufficient reflectivity to generate a detectable return pulse. Pulses that are too weak - either because of low reflectivity or because of attenuation in the optical path due to, for example, windows - result in holes in the resultant point cloud which need to be filled in some way. We have observed that, in a built environment, buildings with windows present a challenge for an LRF: the beam enters the window, is attenuated, reflects weakly inside, and is attenuated further on exit. Since the window itself will sometimes produce an adequate reflection, data returned from windowed structures presents a difficult processing problem. Alexa et al. [1] give an example of using a Moving Least Squares (MLS) projection procedure on oversampled points for point cloud rendering; they also evaluate the accuracy of an LRF.

We investigated two techniques for processing the raw data received from the LRF system available to us (an HDS2500 from Leica [11]) and compared their accuracy. For calibration purposes, we constructed a precisely machined metal cube to which we could attach a variety of precisely machined targets which are dimensionally stable and whose geometry is specified. The cube is shown in Fig 1: it has three sides of 4.0 mm Al plate, supported by a rigid internal framework. A grid of 6.0 mm holes over the surface allows targets of various sizes and shapes (appropriate to the calibration exercise) to be attached. Stainless steel M6 bolts attach the targets, allowing sufficient pressure to be applied to accurately align the targets with the metal base.
The targets used in this study were composed of discs of 4.0 mm Al plate with 20.0 mm, 40.0 mm and 50.0 mm diameters, arranged into a set of identical cylindrical pyramids: one example is shown in Fig 1 (right).

Figure 1: (Left) Calibration cube; (Right) Example of a target

However, the cube design permits a wide variety of target shapes and sizes to be used: even the simple discs used here can be arranged to form a variety of cylindrical targets with different footprints (spatial extent) and step sizes. Additional shapes are easily added if necessary. We estimate that the dimensions of the calibration cube are accurate to ±0.1 mm - well below the quoted 6 mm resolution of the HDS2500 scanner [11] - so that the cube and its targets contribute negligible uncertainty to our measurements.

2 Accuracy evaluation

Initially, we ran some simple trials to measure the accuracy and precision of the HDS2500. We arranged one of the calibration cube faces roughly perpendicular to the scanner's axis and about 4 m from the scanner itself. We then selected a region with 12 targets in it - see Fig 2. Several scans of this region were made and the resulting dense point clouds were captured (see Fig 2, right). To measure random measurement noise, we selected regions in the point cloud that were well separated from the pyramidal targets and fitted a plane through all the points (7,372 points in this experiment). For convenience in subsequent processing, all points were rotated into a coordinate system based on the cube, in which the flat surface was described by z = c, where c is a constant. Fig 3 shows the distribution of deviations of the z-coordinates of in-plane points from the mean plane; the standard deviation of the random measurement noise is σ = 3.3 mm at z = 4 m. A vertical scan through the centers of the targets on the calibration cube is shown in Fig 4, along with the expected profile, showing the random errors in measured distances. Steps in the expected profile are 4 mm high - the thickness of the discs forming the pyramids.
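Once the points are rotated into the cube coordinate system, the noise measurement reduces to computing the spread of z values about the fitted plane z = c. A minimal sketch in Python (the toy points below are made up; the real experiment used 7,372 points):

```python
import math

def plane_noise_sigma(points):
    """Standard deviation of z about the mean plane z = c.

    Assumes the points have already been rotated into the cube
    coordinate system, so the flat face satisfies z = c.
    """
    zs = [p[2] for p in points]
    c = sum(zs) / len(zs)                        # best-fit plane offset
    return math.sqrt(sum((z - c) ** 2 for z in zs) / len(zs))

# toy data: a face at z = 4000 mm with +/-3 mm of measurement noise
pts = [(0, 0, 4003.0), (1, 0, 3997.0), (0, 1, 4003.0), (1, 1, 3997.0)]
print(plane_noise_sigma(pts))  # 3.0
```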
Figure 2: (Left) View of the scene using the scanner's built-in camera, showing the scanned region; (Right) Reconstructed scene

One problem with LRF measurements is illustrated in Fig 5, which shows two scans across a region of the calibration cube containing several targets. One target shows an anomalous return: we believe this is due to a reflection off the target. The cube is constructed from metal plates and standard bolts; these were not polished and thus present a surface which is fundamentally rough (i.e., has surface structure with dimensions of the order of the wavelength of the visible light used) and effectively scatters light. However, one beam reflected sufficiently strongly from the target (due to a locally polished, or optically smooth, region) that the return pulse shape was considerably distorted - probably saturating the receiving electronics. This in turn causes the threshold detection circuitry to stop the timer early and infer a shorter distance than the correct one. It highlights the importance of eliminating reflecting surfaces of any type from the scene when using the LRF technique. To extract the maximum accuracy from the equipment, times of flight shorter than the laser pulse width are measured; thus such systems rely upon threshold detection circuitry to detect the same point in each returned pulse.

3 Volume estimation

We used 12 of the layered pyramids (targets) to assess the ability of the device to accurately measure volumes, using two techniques to process the raw data. The pyramids are composed of three discs with radii of 25.0 mm, 20.0 mm and 10.0 mm and a uniform height of 4.0 mm. Thus the volume of each calibration target is

    V = hπ(r₁² + r₂² + r₃²) + v_b = 4.0 π (25² + 20² + 10²) + 1.7×10² ≈ 1.43×10⁴ mm³,

where v_b = 1.7×10² mm³ is the volume of the bolt head.
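As a quick check of the target volume arithmetic (a sketch; the dimensions are those given in the text):

```python
import math

# One layered-pyramid target: three 4.0 mm thick discs of radii
# 25, 20 and 10 mm, plus the bolt-head volume v_b = 1.7e2 mm^3.
h, radii, v_b = 4.0, (25.0, 20.0, 10.0), 1.7e2
V = h * math.pi * sum(r * r for r in radii) + v_b
print(round(V))  # about 1.43e4 mm^3
```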

Figure 3: Distribution of measured deviations from the mean plane

Figure 5: Profiles of two lines, showing reflection anomaly

Figure 6: Failure of basic Delaunay triangulation: the numbers attached to points represent the order in which they are returned by the scanner. Sorting these points and triangulating with a greedy (nearest point) algorithm puts point 6 in the wrong position and creates a non-existent wedge in the triangulated output.

Figure 7: Mesh reconstruction of a single target

The scan was set so that sampled points were separated by 5 mm at a distance of 4 m. The point clouds acquired in this experiment contain 100 rows and 100 columns. Two categories of techniques for volume estimation from point clouds can be found: one based on reconstructed 3D models, the other based on isolated points. The first group is based on matched 3D shapes [4, 7, 16] or meshed surfaces [2, 5]. We did not have the predefined shape library necessary for matched shapes, so we were limited to considering meshed surfaces.

3.1 Meshed surface

The first step in this technique is to reconstruct the target correctly. There are many well-studied interpolation methods such as Delaunay triangulation [9, 13] or (equivalently) Voronoi diagrams [3]. Since the point clouds from an LRF represent only the surface features of the target, connection of adjacent points is also suitable for target reconstruction. Because an LRF scanner scans in a single direction through the scene, it can - with certain types of structures in the scene - produce points which are apparently out of order in the x or y coordinates. This means that a basic Delaunay triangulation algorithm - which sorts points by x or y coordinates first and then connects points with a greedy approach - will produce an incorrect triangulation; see Fig 6. Thus the sorting step must be omitted and nearest neighbors - as returned by the scanner - must be connected.
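Connecting scan-order neighbors avoids the mis-ordering problem just described; the target volume then follows by summing, for each triangle, the prism between it and the reference plane. A minimal sketch in Python (the grid heights and spacing are made up):

```python
def grid_mesh_volume(z, dx, dy):
    """Volume between a regular scan grid of heights z[row][col]
    and the reference plane z = 0.

    Each grid cell is split into two triangles whose vertices are
    scan-order neighbours (no re-sorting), and each triangle
    contributes a prism of volume (xy-area) * (mean corner height).
    """
    vol = 0.0
    tri_area = dx * dy / 2.0
    for r in range(len(z) - 1):
        for c in range(len(z[0]) - 1):
            a, b = z[r][c], z[r][c + 1]
            d, e = z[r + 1][c], z[r + 1][c + 1]
            vol += tri_area * ((a + b + d) / 3 + (b + d + e) / 3)
    return vol

# a flat 4 mm slab sampled on a 3x3 grid at 5 mm spacing:
# footprint 10 mm x 10 mm, so the volume is 400 mm^3
slab = [[4.0] * 3 for _ in range(3)]
print(grid_mesh_volume(slab, 5.0, 5.0))  # 400.0
```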
Variations of the basic Delaunay triangulation can solve this problem - for example, central projection [see Fig 6 (left)] - but the user must realize that it is necessary to use one of these variations. Fig 7 shows the triangulated reconstruction of a single calibration target. The volume of the targets can be calculated by adding the volumes of the prisms formed by connecting each triangle to its projection onto the z = c plane. This method is simple, computationally efficient and filters noise effectively.

3.2 Isolated points

If there are no missing points, the volume can also be calculated directly from the isolated points in the point cloud. Each point can be regarded as the top of a truncated pyramid in 3D space. The height, or z dimension, of this top is simply the difference between the returned z value and any suitable reference plane forming the bottom of the pyramid (here we use the furthest z plane). If the LRF scans a region with sufficient reflectivity (i.e., there are no holes in the point cloud), then the area A of each pyramid bottom on the reference plane is simply the total scan area A_t divided by the number of sample points:

    A = A_t / n.

The area of a pyramid top, A_i, can be calculated by

    A_i = ((Z_r - h_i) / Z_r) A,

where Z_r is the z value of the reference plane. Suppose there is a bounding box which contains the target object in the point cloud. The depth of the bounding box in the z-direction is H and its area on the xy-plane is A_t, so the volume of the bounding box is V = A_t H. With n points, each having bottom area A on the xy-plane and top area A_i, the volume can be calculated by:

    V = Σ_{i=0}^{n} ((A - A_i)/3 + A_i) h_i    (1)

where ((A - A_i)/3 + A_i) h_i approximates the volume of the i-th truncated pyramid. The volume of each target can be estimated by considering only those points in the region of that target. After the pre-processing stage, in which the scene was rotated so that the flat surface lies in the plane z = 0, the calculation becomes essentially trivial: addition of the z coordinate values and multiplication by the (constant) area of each voxel.

The percent errors of both methods for the calibration marks are shown in Table 1 (see also Fig 8). For this particular experiment the errors are, not surprisingly, quite high: the experiment was designed to test the capability of the system to measure small volumes (relative to the σ = 3.3 mm error in each distance measurement). Whilst this LRF can clearly detect a volume as small as 1×10⁴ mm³, it is unable to measure such a small volume with any accuracy.

Figure 4: Observed and expected profiles along a horizontal scan

Figure 8: Errors for individual targets

3.3 Volume estimation - small cylindrical targets

We observed that when the LRF scanned a circular hole in an object which had a flat base, it produced a cone-shaped volume instead of the expected cylinder.
To characterize this behavior, we mounted a target with a set of circular holes of various diameters parallel to, and a fixed distance above, a flat surface. The surfaces were then set up perpendicular to the beam of the scanning laser. This should have produced a set of cylindrical volumes. The holes varied from 13 mm to 1 mm in diameter. The sampling spacing (resolution) was set to 1 mm. The distance between the targets and the LRF was about 5.4 m. From Fig 9 (right), we can see that the LRF cannot see into a hole with a diameter less than 4 mm, even when the scanning resolution is nominally only 1 mm. This is due to the divergence (finite spread) of the laser spot: part of the beam is reflected from the surroundings of any hole less than 4 mm in diameter, and the hole becomes invisible to the scanner. For holes more than 4 mm in diameter, part of the laser beam is still reflected from the material surrounding the hole, leading to the sloping edges and cone-shaped volumes seen in Fig 9 (right). These naturally underestimate the measured volume: note that the volume is still severely affected by the beam spread even when the hole diameter is 13 mm. The volume estimate is expected to be smaller than the real value because the reconstructed models are tapers instead of cylinders. The bigger the target, the more accurate the volume estimate.

Figure 9: Volume estimation accuracy for small cylindrical targets. Left: Original target; Right: Reconstructed model

Figure 10: Error in volume estimation

    Target index   Meshed surface   Isolated points
    1                  26.2             18.0
    2                  22.9             13.9
    3                 -12.8            -21.2
    4                  27.9             14.8
    5                  24.9             13.3
    6                 -14.7            -19.4
    7                 -10.3            -18.0
    8                  18.6             10.6
    9                  11.7              5.2
    10                 20.4             12.3
    11                 24.3             16.4
    mean               12.7              4.2

Table 1: Percent errors for individual targets

3.4 Large volume experiment

An indoor experiment, shown in Fig 11, was used to evaluate the measurement of larger volumes. The target consisted of three paper box structures, arranged as shown in Fig 11 (right). The target object has three parts (left, middle and right); the surfaces of the left and right parts are close to the LRF and the middle part is farther away. The aim was to measure the volume of the central hole. The distance between the target and the LRF is 3 m. The area of the target object is about 0.15 m². Each scan consisted of 10⁴ points.

Figure 11: Large volume experiment

3.5 Large volume estimation

The depth and width of the hole were determined by direct measurement of the surrounding boxes. We used the height of the scanned area as reported by the scanner as the third dimension, since this depends only on the mechanical properties of the LRF's scanning system, which can measure angles to x arc-seconds. We estimate the reference volume as x 10y ± z mm3. Again, we used the meshed surface and isolated points methods to estimate the volume from the LRF scanned data, with the results shown in Table 2.

                        Actual   Meshed surface   Isolated points
    Volume (×10⁶ mm³)    21.1        20.3             20.7
    Error (%)             -          -4.1             -1.8

Table 2: Percent error - large volume experiment

3.6 Comparison of different methods

Reconstruction and volume calculation are the two main uses of point clouds from an LRF. If the LRF is scanning a scene in which every point has a minimum reflectivity, then it will produce a dense point cloud with 100% returns. Furthermore, the grid points in the scan will usually be evenly distributed throughout the scene. This means that the computationally trivial isolated points method can be used to calculate volumes. However, if the scene contains a significant number of low reflectivity points (e.g., windows in buildings), then the returned point cloud will have large gaps. In such cases, the simple isolated points calculation will tend to fill in the gaps with an average depth for the whole scene and thus introduce significant errors. For these cases, triangulation of the scene before volume calculation will produce better results, as it will tend to interpolate across gaps in some reasonable manner; e.g., if there are no reflections at all from a windowed area, then a building volume would be calculated based on closed windows - a perfectly reasonable approximation.
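To make the comparison concrete, the isolated-points calculation of Section 3.2 reduces, once the scene is rotated so that the reference plane is z = 0, to the trivial per-point summation below (a sketch in Python; the data are made up, and the frustum correction of Eq. (1) is omitted for this flat-top case):

```python
def isolated_points_volume(heights, cell_area):
    """Treat every return as the flat top of a column of footprint
    `cell_area` standing on the reference plane z = 0 and sum them.
    """
    return sum(cell_area * h for row in heights for h in row)

# a flat 4 mm slab: 9 points, each owning 25 mm^2 of scan area
slab = [[4.0] * 3 for _ in range(3)]
print(isolated_points_volume(slab, 25.0))  # 900.0
```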

4 Conclusion

Use of laser range finders allows (essentially direct) recovery of distances in a scene and thus might be expected to be subject to smaller errors than other imaging techniques, which generally require more than one image and thus derive distance information from several separate measurements. Also, an LRF can work in the dark, whereas optical reconstruction methods need ambient light. This study served to highlight some of the problems with LRF systems and measured the depth accuracy obtainable with one commercial system. One of the key constraints on the technique is the requirement for uniform reflectivity across a scene, so that a minimum-intensity return pulse is available. Some other techniques, such as stereo photogrammetry, can - in principle, at least - match up the black spots of low reflectivity. Furthermore, our experiments showed that points of high reflectivity can introduce a subtle error - one that cannot be easily detected in an unknown scene. The experiment on small holes showed that it is necessary to be aware of the actual laser beam divergence: for the instrument we used, significant errors were present even with a target diameter over 10 mm. Thus LRF users need to examine scenes carefully for such potential traps. Unfortunately, the accompanying conventional camera image is of little help in automatically identifying problem areas, as it is taken slightly off-axis with respect to the laser scanner. Complex optics integrating the viewer and the scanner may solve this problem, but would add considerably to the cost of a system.

References

[1] M. Alexa, J. Behr, D. Cohen-Or, and S. Fleishman. Computing and rendering point set surfaces. IEEE Transactions on Visualization and Computer Graphics, Vol. 9, No. 1, pages 3-15, 2003.
[2] N. Amenta, M. Bern, and M. Kamvysselis. A new Voronoi-based surface reconstruction algorithm. In Proc. SIGGRAPH, pages 415-421, 1998.
[3] F. Aurenhammer and R. Klein. Voronoi diagrams. In Handbook of Computational Geometry, J.-R. Sack and J. Urrutia (eds.), Chapter 5, pages 201-290, Elsevier Science Publishing, 2000.
[4] G. Barequet and M. Sharir. Partial surface and volume matching in three dimensions. IEEE Trans. Pattern Analysis and Machine Intelligence, 19, pages 1-21, 1997.
[5] G. Blelloch, G. Miller, and D. Talmor. Developing a practical projection-based parallel Delaunay algorithm. In Proc. Annual Symposium on Computational Geometry, pages 186-195, 1996.
[6] P. Heckbert and M. Garland. Optimal triangulation and quadric-based surface simplification. Computational Geometry: Theory and Applications, 14, pages 49-65, 1999.
[7] A. Johnson, R. Hoffman, J. Osborn, and M. Hebert. A system for semi-automatic modeling of complex environments. In Proc. Int. Conf. on 3-D Digital Imaging and Modeling, pages 213-220, 1997.
[8] R. Klette and R. Reulke. Modeling 3D scenes: paradigm shifts in photogrammetry, remote sensing and computer vision. Keynote, IEEE Int. Conf. Systems and Signals, Kaohsiung, Taiwan (8 pages, conference CD), 2005.
[9] D. Lee and B. Schachter. Two algorithms for constructing a Delaunay triangulation. Computer and Information Sciences, 9(3), pages 219-242, 1980.
[10] http://www.hds.leica-geosystems.com (last access: 14 Sep. 2005).
[11] http://hds.leica-geosystems.com/products/hds2500_specs.html (last access: 14 Sep. 2005).
[12] M. Mettenleiter, F. Hartl, I. Heinz, and B. Neumann. Laser scanning and modelling - industrial and architectural applications. In Proc. 6th Conference on Optical 3-D Measurement Techniques, 2003.
[13] P. Mücke, I. Saias, and B. Zhu. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations. In Proc. Twelfth Annual Symposium on Computational Geometry, pages 274-283, 1996.
[14] G. Perchet and M. Lescure. Magnification of phase shift for a laser range finder: intrinsic resolution and improved circuit. Journal of Optics, Vol. 29, No. 3, pages 229-235, 1998.
[15] T. Schulz and H. Ingensand. Influencing variables, precision and accuracy of terrestrial laser scanners. In Proc. INGEO 2004, Bratislava, 2004.
[16] R. Veltkamp. Shape matching: similarity measures and algorithms. In Proc. IEEE Shape Modeling International, pages 188-197, 2001.