Optical 3D Sensors for Real Applications: Potentials and Limits

Abstract

Optical 3D sensors measure local distances or the shape of surfaces, from the nanometer regime to the meter regime. Surprisingly, only three different physical mechanisms of signal formation are necessary to cover this range. These mechanisms determine different limits of the ultimate measuring uncertainty. We will discuss those limits of optical 3D sensors and give rules for selecting the proper sensors for different applications.

Peter Klinger (1), peter.klinger@physik.uni-erlangen.de
Klaus Veit (1,2), veit@3d-shape.com
Gerd Häusler (1), haeusler@physik.uni-erlangen.de
Stefan Karbacher (2), karbacher@3d-shape.com
Xavier Laboureux (1,2), laboureux@3d-shape.com

(1) Chair for Optics, University of Erlangen-Nuremberg, Staudtstrasse 7/B2, D-91058 Erlangen, Germany, +49-9131-852-8372, kerr.physik.uni-erlangen.de
(2) 3D-SHAPE GmbH, Henkestr. 127, D-91052 Erlangen, Germany, +49-9131-977959-0, www.3d-shape.com
Introduction

Most problems of industrial inspection, reverse engineering, and virtual reality require data about the geometrical shape of objects in 3D space. Such 3D data offer advantages over 2D data: shape data are invariant against changes of illumination, soiling, and object motion. Unfortunately, those data are much more difficult to acquire than video data about the two-dimensional local reflectivity of objects.

In our talk we will discuss the physics of 3D sensing and address the following subjects: different types of illumination (coherent or incoherent, structured or unstructured), the interaction of light with matter (coherent or incoherent, at rough or at smooth surfaces), and the consequences of Heisenberg's uncertainty relation. Knowledge of the physical limits of the measuring uncertainty enables the design of optimal sensors that work at those limits, and helps to judge available sensors.

We will show that the vast number of known optical 3D sensors is based on only three different principles. The three principles differ in how the measuring uncertainty scales with the object distance. We will further see that with only two or three different sensors the great majority of problems of automatic inspection or virtual reality can be solved. We will not explain many sensors in detail; rather, we will discuss the potentials and limitations of the major sensor principles, for the physicist as well as for the benefit of the user of optical 3D sensors: laser triangulation, phase-measuring triangulation, and white-light interferometry on rough surfaces.

As mentioned above, it turns out that with this set of sensors, 3D data of objects of different kinds and materials can be acquired. The measuring uncertainty ranges from about 1 nanometer to a few millimeters, depending on the principle and on the measuring range.
We will illustrate the potential of each sensor with examples of measured objects and a discussion of the physical and technological drawbacks. We will specifically address the interests of potential users of these sensors concerning the applicability to real problems.

Here, we briefly explain the three principles. In laser triangulation systems we project a laser spot onto the surface under test from a certain direction of illumination, and we observe the spot with a video line array from a different direction of observation. The angle between the two directions is called the angle of triangulation, see Figure 1. If the object distance changes, the lateral position of the spot image changes as well. With simple geometric calculations, we can evaluate the distance of the spot from its lateral position. There is a straightforward improvement of laser-spot triangulation: projecting a line instead of a point.

Figure 1: Principle of triangulation.
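The geometric calculation mentioned above can be sketched in a few lines. This is a minimal small-displacement model, not the paper's own formula: we assume a distance change dz displaces the spot image laterally by dx' = m * dz * sin(theta), where m is the imaging magnification and theta the angle of triangulation; the function name is ours.

```python
import math

def delta_z(delta_x_sensor, magnification, theta_deg):
    """Height change of the surface from the lateral shift of the spot image.

    Small-displacement model (an assumption for illustration):
    a distance change dz shifts the spot image by dx' = m * dz * sin(theta),
    so dz = dx' / (m * sin(theta)).
    """
    return delta_x_sensor / (magnification * math.sin(math.radians(theta_deg)))

# example: 0.25 mm spot shift, magnification 0.5, 30 deg triangulation angle
dz = delta_z(0.25, 0.5, 30.0)  # height change in mm
```

In a real sensor this linearization is replaced by an exact calibration of the imaging geometry, but the scaling with sin(theta) already shows why a larger triangulation angle gives a smaller distance uncertainty.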
This is sometimes called laser sectioning, because an observing video camera sees a profile (cross section) of the surface under test. To acquire the entire surface, a one-dimensional scan of the laser line over the object is necessary.

With phase-measuring triangulation (PMT) we can proceed further, from a line sensor to an area sensor that measures the shape z(x, y) of an entire surface patch without any scanning. The basic idea is to project a grid pattern onto the object. If the object surface is curved, the camera observes curved grid lines. If we project sinusoidal patterns with different phase shifts, it can be shown that from at least three exposures we can derive the local phase of the grid image and, hence, the distance of each object point (Figure 2). It is possible to project a perfect sinusoidal pattern with a binary mask, by using an astigmatic projection lens system (Figure 3).

Figure 2: Sinusoidal fringes projected onto the surface of a half sphere, observed from the camera's viewpoint.

Figure 3: Principle of astigmatic projection for phase-measuring triangulation.

The two principles discussed so far are based on triangulation. The third principle on our list is white-light interferometry. Interferometry is essentially a time-of-flight measurement, based on interference of the object light wave with a reference light wave. A distance variation of the object causes a phase variation of the object wave. Since those phase variations can be measured with extreme accuracy (better than λ/1000), we can measure shape variations in the sub-nanometer regime. This, however, works only for optically smooth (polished) surfaces. For rough surfaces, a phase evaluation is impossible, since the object wave suffers from speckle noise; the phase is arbitrary and does not contain information about the distance. Instead, we use the temporal coherence properties to detect the time of flight of the object wave.
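The phase evaluation from phase-shifted exposures can be illustrated with the widely used four-step variant (the text above only requires at least three exposures; the four-step formula below is a standard textbook algorithm chosen here for its simplicity, not necessarily the one used in the paper's sensor):

```python
import numpy as np

def phase_from_four_steps(i1, i2, i3, i4):
    """Wrapped fringe phase from four frames with shifts 0, pi/2, pi, 3*pi/2.

    With I_k = a + b*cos(phi + delta_k) one finds
      I4 - I2 = 2*b*sin(phi)  and  I1 - I3 = 2*b*cos(phi),
    so phi follows from a quadrant-correct arctangent, independent of the
    (unknown) background a and modulation b.
    """
    return np.arctan2(i4 - i2, i1 - i3)

# demo: synthesize fringes along one camera line and recover the phase
x = np.linspace(0.0, 1.0, 256)
phi_true = 4.0 * np.pi * x            # true phase ramp (two fringe periods)
a, b = 0.5, 0.4                        # background and modulation
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = phase_from_four_steps(*frames)   # wrapped to (-pi, pi]
```

The recovered phase is wrapped modulo 2π; a real sensor follows this step with phase unwrapping and a calibration that converts phase to distance.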
We make use of the fact that the reference wave and the object wave display interference contrast only if their path-length difference is smaller than the coherence length of the source (Figure 4). This allows us to measure the shape of macroscopic objects with an uncertainty of only one micrometer. It should be noted that in interferometric sensors illumination and observation are coaxial; hence, we can look into narrow holes.
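The evaluation of one such correlogram can be sketched as follows: scan the object along z, and take the surface height as the position of maximum interference contrast, i.e. the peak of the correlogram envelope. This is a simplified illustration under assumed parameters (Gaussian coherence envelope, FFT-based Hilbert transform for the envelope); the actual coherence-radar signal processing may differ.

```python
import numpy as np

def envelope(signal):
    """Envelope via the magnitude of the analytic signal (FFT-based Hilbert)."""
    n = len(signal)
    spec = np.fft.fft(signal - signal.mean())
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

# demo: synthetic correlogram in one speckle (units: micrometers, assumed values)
z = np.linspace(-10.0, 10.0, 2001)     # scan positions along the z-axis
z0, lam, lc = 2.3, 0.8, 1.5            # surface height, wavelength, coherence length
corr = 0.5 + 0.4 * np.exp(-((z - z0) / lc) ** 2) * np.cos(4.0 * np.pi * (z - z0) / lam)
z0_est = z[np.argmax(envelope(corr))]  # height = scan position of maximum contrast
```

Note the carrier phase 4π(z - z0)/λ: the optical path difference changes by twice the scan displacement, so fringes repeat every λ/2. Because only the envelope position is evaluated, the arbitrary speckle phase drops out, which is exactly why this works on rough surfaces.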
Figure 4: Principle of the coherence radar (left). The correlogram (right) shows the (temporal) interference pattern in one single speckle while scanning the object along the z-axis.

The Physical Limits of Optical 3D Data Acquisition

There are several reasons that limit the optical acquisition of 3D data. We developed a theory about the physical limits of optical 3D sensors [1]. According to this theory, there are only three different physical measuring principles for optical 3D sensors. They differ in the way the measuring uncertainty scales with the object distance. The best known and most widely used sensors are based on triangulation (we call these sensors "type I"). Their performance is limited by coherent noise; the measuring uncertainty scales with the square of the object distance. There is another class of sensors ("type II"): these sensors are based on white-light interferometry on rough surfaces. The signal formation is quite complex and different from classical interferometry, which is why we gave this principle a new name: coherence radar [2]. The coherence radar is characterized by the surprising feature that the measuring uncertainty does not scale with the object distance at all, but only with the surface roughness. Classical interferometry at smooth surfaces shows a third type ("type III") of scaling behavior [3]. It features optical averaging over the microtopography: the measuring uncertainty is proportional to the inverse of the standoff distance.

Conclusion

It is quite useful to understand the physical limits of optical 3D sensors, because we can judge whether existing sensors already reach the physical limits or whether there is still room for technical improvement (or whether the advertising of a sensor claims better performance than physics allows). In our group, our ambition is to build sensors that reach those physical limits.
Yet, it is not sufficient to know the physical limits, because there are many non-physical boundary conditions that may keep us from reaching them. Such boundary conditions are, for example, specularly reflecting surfaces, volume scatterers, strongly tilted surfaces, a large dynamic range of reflectivity, and moving objects (living people). These boundary conditions require not only physical knowledge but also a careful choice and design of the technology. Our sensor based on phase-measuring triangulation [4] incorporates some measures to overcome the difficulties above and is specifically fast for medical applications.
References

[1] G. Häusler, P. Ettl, M. Schenk, G. Bohn, I. Laszlo. Limits of Optical Range Sensors and How to Exploit Them. In T. Asakura, ed., Trends in Optics and Photonics, ICO IV, Springer Series in Optical Sciences, Vol. 74, pp. 328-342, Springer, Berlin, Heidelberg, New York, 1999.
[2] T. Dresel, G. Häusler, H. Venzke. 3D-sensing of rough surfaces by coherence radar. Appl. Opt. 31, No. 7 (1992), pp. 919-925.
[3] G. Häusler, M. B. Hernanz, R. Lampalzer, H. Schönfeld. 3D Real Time Camera. In W. Jüptner, W. Osten, eds., Fringe '97, 3rd International Workshop on Automatic Processing of Fringe Patterns, 1997.
[4] M. Gruber, G. Häusler. Simple, robust and accurate phase-measuring triangulation. Optik 89, No. 3 (1992), pp. 118-122.