Self-calibration of telecentric lenses: application to bubbly flow using a moving stereoscopic camera. S. COUDERT 1*, T. FOURNEL 1, J.-M. LAVEST 2, F. COLLANGE 2 and J.-P. SCHON 1. 1 LTSI, Université Jean Monnet, 23 rue Michelon, 42023 Saint-Etienne Cedex, France. 2 LASMEA, Université Blaise Pascal, 24 avenue des Landais, 63177 AUBIERE Cedex, France. KEYWORDS: Main subject(s): advanced image processing, self-calibration; Fluid: hydrodynamics; Visualization method(s): shadow method, stereoscopic camera, moving camera; Other keywords: telecentric lenses, multi-phase flow, bubble, flow visualisation. ABSTRACT: When stereoscopic imaging techniques are used to measure structures in turbulent flows, a calibration of the cameras is needed to relate image units to real-world ones. Generally, the calibration procedure is quite long and requires a large translation stage to be introduced into the test section. The method proposed here gets rid of that translation stage. It relies on self-calibration techniques: the calibration target is positioned by hand (i.e. at unknown positions). The self-calibrated stereoscopic system uses telecentric lenses on both cameras. It is applied to measuring the successive positions in time of a bubble rising in a tank filled with still water. The entire stereoscopic system moves to follow the bubble. 1. Introduction The stereoscopic system described here is used to follow a bubble in turbulent conditions and to measure its position in time. The system is mounted on a large translation stage and moves to follow the rise of the bubble. First, a calibration of the cameras is needed to relate image units to real-world ones. Generally, the calibration procedure is quite long: a precise translation stage must be introduced into the test facility (Soloff et al. 1997, Oord 1997, Riou et al. 1998, Coudert et al. 2001, Coudert and Schon 2003), images of a calibration target are then recorded at different known positions, and finally the stage is removed.
In many facilities, introducing and removing such a large translation stage requires the test section to be dismounted and remounted. The first aim of this paper is to remove the need for the calibration translation stage by using a self-calibration algorithm for the system. This method uses the well-known techniques of self-calibration
* Corresponding author: Dr. Sébastien COUDERT, LTSI, UMR CNRS 5516 / Université Jean Monnet, 23 rue Michelon, 42023 Saint-Etienne Cedex, France. email: coudert@univ-st-etienne.fr
developed in robotics research. It is applied here to stereoscopic image velocimetry. The second aim is to point out the problems encountered when using a moving imaging system. The main advantage of this method is that the calibration target is positioned by hand (i.e. at random positions). The main drawback is that a model of the stereoscopic system has to be created. However, the method allows non-linear functions to be introduced into the model, such as Scheimpflug imaging, optical distortions of the lenses and the geometry of the transparent walls of the facility. The imaging system uses telecentric lenses on both cameras of the stereoscopic system (Coudert et al. 2001, Fournel et al. 2000 and 2003). It is applied to measuring the successive positions in time of a bubble rising in a tank filled with still water. The entire stereoscopic system moves to follow the bubble. The experimental setup allows a maximum camera speed of 1 m.s-1 over a distance of 0.5 m. In a first step, the stereoscopic imaging system is self-calibrated. The method consists in first positioning a 2D calibration target in front of the stereoscopic cameras in a few different 3D positions; secondly, these unknown positions are computed together with the camera parameters. The optimised parameters also include the positions of the dots on the 2D calibration target. For each 3D target position, the dot centre positions of the calibration target have to be determined with high accuracy in the recorded images. In a second step, images of the bubble going upwards are recorded by the moving stereoscopic cameras. The absolute positions and the 3D velocity vectors of the bubble can then be reconstructed. Firstly, the experimental setup and its geometry are described in paragraph 2. Secondly, the processing algorithms are presented together with their accuracy. Then, the results are discussed in paragraph 4. Finally, conclusions and prospects are given in the last paragraph.
In a few paragraphs, the text refers to another paper in these proceedings (i.e. Coudert et al., reference F4055), as the experimental setup is almost the same. 2. Experimental setup and geometry The same experimental setup and geometry as in another paper in these proceedings (i.e. Coudert et al., reference F4055) has been used, except that this time two identical cameras and lenses are used instead of only one, in order to obtain a stereoscopic view of the rising bubbles. The angle between the two cameras is 90°. The white screen and the light spot are also doubled, in order to obtain equivalent shadow imaging of the rising bubbles on both cameras. The whole setup is shown in Fig. 1: on the left side, the large translation stage supports the moving cameras. The right camera is in the bottom right-hand corner of the image, and the lens of the left one lies between the translation stage and the water tank.
Fig. 1 Global view of the experimental setup. A few bubbles are rising from the orifice in the middle of the water tank. 3. Processing 3-1 Self-calibration Compared with standard calibration, self-calibration differs mainly in that the calibration parameters and the positions of the calibration target are determined at the same time (and, in a final stage, the positions of the dots on the target, cf. Lavest et al. 1999). As for standard calibration, the intrinsic camera parameters are computed (e.g. optical point, focal number, distortions; cf. Coudert et al. 2001, Fournel et al. 2000 and 2003). But, as the positions of the calibration target are unknown, they have to be computed from different views of the same target. The mechanical 3D position of the target is also referred to as the extrinsic parameters. At least 6 calibration images are needed in order to determine the 6 spatial parameters (i.e. extrinsic parameters: 3 translations and 3 rotations). Different views of the calibration target (i.e. translations and rotations) are needed in the set of calibration images. A non-linear optimisation algorithm (such as Levenberg-Marquardt, cf. More 1977) is used to optimise the intrinsic and extrinsic parameters of the cameras. The optimisation is computed in the image plane
(i.e. the CCD chip plane). The minimised difference is the one between the recorded position of a dot projection and its simulated position. On the one hand, the recorded dot position is computed using a dedicated image processing algorithm, which is much more precise than the centroid of the dot image; see Fig. 2 and Lavest et al. 1999. On the other hand, the simulated position of the dot is computed from the parameters optimised at the current computing step. This processing needs initial values for the parameters. In our case, the optimised parameters are: 2 sets of intrinsic parameters for the 2 cameras of the setup; 12 sets of extrinsic parameters for the relative position between the two cameras and the 12 views of the calibration target (i.e. 24 calibration images; a sample is shown in Fig. 3); and 15 positions of the dots in the calibration target plane. The algorithm that finds the positions of the calibration target dots (cf. Fig. 2) has an accuracy of 0.03 pixel RMS on the image. That corresponds to an accuracy of about 5 µm in space for the 3D points. Fig. 2 Computation of a recorded dot centre using a model of luminance. 2.1: A dot of the calibration target is assumed to be white on a black background. 2.2: The grey levels of this dot form a 3D shape that is transformed to a single-dimension space (i.e. from the ellipsoidal shape to a circular model, and finally from circular to 1D space). 2.3: In this space, the model consists of a few geometrical parameters fitted to the luminance (the grey levels, denoted gl).
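The optimisation loop described above can be sketched in a few lines. This is a simplified illustration only, not the implementation used in the paper: it assumes a single camera with a pure orthographic (telecentric) projection, one magnification parameter, no distortion, and a single target pose; the function names and the synthetic dot grid are ours. SciPy's least-squares solver plays the role of the Levenberg-Marquardt optimiser (More 1977).

```python
import numpy as np
from scipy.optimize import least_squares

def rotation(rx, ry, rz):
    """Rotation matrix from three Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, dots_3d):
    """Orthographic (telecentric) projection of the target dots.

    params = [magnification, rx, ry, rz, tx, ty]: one intrinsic
    parameter (magnification) and one extrinsic set (target pose)."""
    m, rx, ry, rz, tx, ty = params
    pts = (rotation(rx, ry, rz) @ dots_3d.T).T + np.array([tx, ty, 0.0])
    return m * pts[:, :2]  # telecentric: depth does not enter the image

def residuals(params, dots_3d, observed_2d):
    """Image-plane difference: simulated minus recorded dot positions."""
    return (project(params, dots_3d) - observed_2d).ravel()

# Synthetic example: a flat 3x5 dot grid observed with known true parameters.
grid = np.array([[i, j, 0.0] for i in range(3) for j in range(5)], float)
true_params = np.array([2.0, 0.1, -0.05, 0.2, 1.0, -0.5])
observed = project(true_params, grid)

# Start from a rough guess and refine by least squares, as the paper
# does with Levenberg-Marquardt on the full parameter set.
guess = np.array([1.5, 0.0, 0.0, 0.0, 0.0, 0.0])
fit = least_squares(residuals, guess, args=(grid, observed), method="lm")
```

In the actual self-calibration, the residuals of all 24 images, both intrinsic sets, the 12 extrinsic sets and the 15 dot positions are stacked into one vector, but the structure is the same: simulate the dot projections, compare with the detected positions, and refine.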
Fig. 3 Self-calibration of a stereoscopic system needs a few pairs of recorded images of the calibration target (sets 1 to 4, cameras C1 and C2). For each pair, the images recorded on the right camera and on the left camera lie on the first line (labelled C1) and on the second line (labelled C2), respectively. Each position, initially unknown, is determined using all the sets by optimising the parameters of the stereoscopic system with respect to the positions of the grid dots in the images. 3-2 Image processing The same image processing as in another paper in these proceedings (i.e. Coudert et al., reference F4055) has been used. The main point to remember is that the detection accuracy of the bubble centre is estimated at around 1.5 pixel RMS on the image. For both the left and right images of the bubble, at any time, the same image processing algorithm is applied (cf. Fig. 4).
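The bubble-centre detection itself is detailed in the companion paper (reference F4055). As a minimal stand-in for shadow images (dark bubble on a bright background), one can threshold the image and take the centroid of the dark pixels. The threshold value and the synthetic image below are assumptions for illustration; the actual processing chain is more elaborate, hence the 1.5 pixel RMS figure quoted above.

```python
import numpy as np

def bubble_centre(image, threshold=128):
    """Binarise a shadow image (dark bubble on bright background)
    and return the centroid of the dark pixels, in pixel units.

    Simplified stand-in for the detection chain of the companion
    paper; the threshold value is an arbitrary assumption."""
    mask = image < threshold            # shadow pixels are dark
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                     # no bubble in the image
    return cols.mean(), rows.mean()     # (x, y) centre

# Synthetic 64x64 bright image with a dark disc centred at (40, 24).
img = np.full((64, 64), 220, dtype=np.uint8)
yy, xx = np.mgrid[0:64, 0:64]
img[(xx - 40) ** 2 + (yy - 24) ** 2 <= 8 ** 2] = 30

cx, cy = bubble_centre(img)  # recovers the disc centre
```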
Fig. 4 Set of two stereoscopic image pairs recorded by the moving cameras (sets 1 and 2, cameras C1 and C2), and the associated binary images from the image processing. The bubble, which goes upwards, has a different apparent shape when viewed from the 2 different points of view. 3-3 Capturing system The same capturing system as in another paper in these proceedings (i.e. Coudert et al., reference F4055) has been used. The main point to remember is that it gives a velocity accuracy of around 1 % RMS. However, strong vibrations of the whole system decrease this accuracy when the system is moved, as the electronic controller tries to compensate for those vibrations. 3-4 Reconstruction The reconstruction of the bubble position is done using the bubble image positions on both cameras and the camera parameters from the self-calibration processing (cf. Fig. 5). The 3D position is triangulated using the two rays coming from the centre of the bubble image (including the optical distortions) and passing through the centre of the optics of each camera (Fig. 5.1). The bubble 3D position is the intersection of these two 3D rays (or, as they do not cross most of the time, the middle point of the segment between the two closest points of these lines). If this algorithm were fed with a 0.1 pixel RMS accuracy on the image for the position of a 3D point projection, that would correspond to an accuracy of about 10 µm in space. As stated in paragraph 3-2, the position of a 3D point projection is assumed to be around 20 times less accurate. The algorithm described above is used to compute several relative positions of the bubble in time. These positions are then converted to absolute positions by adding the absolute positions of the cameras given by the capturing system (cf. paragraph 3-3). In figure 5.2, the absolute position of a bubble in time is represented from a set of 59 recorded image pairs.
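The midpoint construction used for the triangulation can be sketched as a generic closest-point computation between two 3D lines. This is the textbook formula, not the authors' code; in practice the two rays would come from the calibrated cameras, after distortion correction.

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Closest-approach midpoint of two 3D rays.

    Each ray is a point p and a direction d. Since the two
    back-projected rays generally do not intersect, the 3D point is
    taken as the middle of the shortest segment joining them."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = p2 - p1
    # Normal equations for the line parameters t1, t2 minimising
    # |p1 + t1*d1 - (p2 + t2*d2)|.
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    det = a11 * a22 - a12 * a12
    t1 = (a22 * (b @ d1) - a12 * (b @ d2)) / det
    t2 = (a12 * (b @ d1) - a11 * (b @ d2)) / det
    c1 = p1 + t1 * d1                # closest point on ray 1
    c2 = p2 + t2 * d2                # closest point on ray 2
    return 0.5 * (c1 + c2)          # middle of the shortest segment

# Two rays at 90 degrees (as the cameras here), nearly crossing at
# the origin but offset by 0.1 in y, so they do not intersect.
p = triangulate_midpoint(np.array([0.0, 0.0, -10.0]), np.array([0.0, 0.0, 1.0]),
                         np.array([-10.0, 0.1, 0.0]), np.array([1.0, 0.0, 0.0]))
# p is the midpoint (0, 0.05, 0) of the shortest joining segment.
```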
Fig. 5 3D reconstruction of a bubble position. 5.1: the 3D position of a bubble is reconstructed using 2 rays from the left and right cameras; both rays (in continuous line) come from the centre of the bubble image (i.e. on the CCD chip) and pass through the centre of the lenses. The intersection of these 2 lines in 3D space gives the position of the bubble. This intersection is taken as the middle of the segment (denoted + on this figure) between the closest points of both lines. 5.2: the reconstructed 3D positions of the bubble in time are represented by the continuous line, using 59 image pairs processed by the reconstruction algorithm described above. 4. Results and discussion 4-1 Processing Most of the processing steps are very accurate, to within less than 1 %, except for the determination of the position of the bubble on the image. This lower accuracy does not come from the algorithm itself, but from the fact that the shape of the bubble changes as it rises in a turbulent flow. The accuracy, which should be around 5 µm according to the calibration, degrades to about 0.5 mm. 4-2 Moving cameras As previously said, the inaccuracy of the measurement system comes mainly from the turbulent phenomena, such as the distortion of the bubble shape in time. Moreover, the moving system that carries the cameras is sensitive to vibrations. Those vibrations add large errors to both absolute and relative positions (e.g. image positions). They also bring other drawbacks: for instance, the adjustments of the cameras and lenses needed to be fixed very tightly. In the future, this kind of vibration should be suppressed by a damped mechanical system: short carrying arms, a steel cable to the counterweight and perhaps a different electronic controller on the translation stage.
5. Conclusion and prospects Computer vision algorithms are appropriate for this kind of application. The original topic of self-calibration has been carried out for a stereoscopic system using telecentric lenses. Image and 3D reconstruction processing also give good results. The second original topic, the moving camera, raises many problems, such as the rigidity of the stereoscopic system or vibrations of parts of the system. The inaccuracy of the measurement system is due to the moving camera system. In the future, this system has to be improved. List of Movies Movie 1: Reconstructed 3D positions of a bubble (3155 kb): the reconstructed path is rotated 360 times by steps of 2° around the Y axis. References Coudert S. and Schon J.-P.: Back-projection algorithm with misalignment corrections for 2D3C stereoscopic PIV. Measurement Science and Technology 12, pp. 1371-1381, 2001. Coudert S., Fournier C., Bochard N., Fournel T. and Schon J.-P.: Corrections for misalignment between the laser sheet plane and the calibration plane: measurement in a turbulent round free jet using stereoscopic PIV with telecentric lenses. PIV'01, 4th International Symposium on Particle Image Velocimetry, Göttingen, 2001. Fournel T., Coudert S. and Riou L.: Stereoscopic 2D3C DPIV with telecentric lenses: calibration and first results. Euromech 411 Colloquium, Rouen, 2000. Fournel T., Coudert S., Fournier C. and Ducottet C.: Stereoscopic Particle Image Velocimetry using telecentric lenses. Measurement Science and Technology 14, pp. 494-499, 2003. Lavest J.-M., Viala M. and Dhome M.: Quelle précision pour une mire d'étalonnage ? [What accuracy for a calibration target?]. Traitement du signal 16(3), pp. 241-254, 1999. More J. J.: The Levenberg-Marquardt Algorithm: Implementation and Theory. Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977. Oord J. V.: The design of a stereoscopic DPIV system. Delft (Netherlands), Laboratory for Aero & Hydrodynamics, pp. 1-50, 1997. Riou L., Fayolle J.
and Fournel T.: PIV measurements using multiple cameras: the calibration method. 8th International Symposium on Flow Visualization, Sorrento, pp. 95.1-95.11, 1998. Soloff S. M., Adrian R. J. and Liu Z.-C.: Distortion compensation for generalized stereoscopic particle image velocimetry. Measurement Science and Technology 8, pp. 1441-1454, 1997.