3D Measurement of Transparent Vessel and Submerged Object Using Laser Range Finder


Hirotoshi Ibe
Graduate School of Science and Technology, Shizuoka University, 3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka 432-8561, Japan

Atsushi Yamashita
Department of Precision Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

Toru Kaneko and Yuichi Kobayashi
Department of Mechanical Engineering, Shizuoka University, 3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka 432-8561, Japan

Abstract: Specimens of precious underwater creatures are preserved in transparent vessels filled with liquid, and measuring the three-dimensional shape of such submerged objects is important. When submerged objects are measured by image-based methods, the refraction of light must be considered. Refraction can be handled by ray tracing, but ray tracing requires the 3D coordinates of the vessel. In this paper, we propose a method that measures both the vessel and the object with a laser range finder (LRF) mounted on a manipulator. Because a transparent vessel is not completely transparent, a reflection from the vessel surface can be detected in addition to the reflection from the object. However, the captured image may contain reflections from various paths, so the correct reflections must be identified. The proposed method solves this problem by examining the behavior of each reflection with respect to the epipolar constraint when the posture of the LRF is changed. Experimental results show the effectiveness of the proposed method.

Index Terms: Underwater Image Measurement, Laser Range Finder, 3D Measurement, Epipolar Constraint.

I. INTRODUCTION

Recently, digital archives of important objects (e.g. cultural assets or prehistoric sites) have been created with various measurement technologies [1]. Although many objects are in the air, some are kept in liquid. For example, specimens of precious underwater creatures, such as abyssal fish, are preserved in transparent vessels filled with liquid. Creating digital archives of submerged objects requires a measurement method that works in a liquid environment without damaging the objects.

Ultrasonic measurement techniques are widely used in underwater sensing. However, measuring submerged objects in a vessel with ultrasound is difficult because of multiple echoes. Image measurement methods can acquire both the position and the color of the measured object from captured images; these techniques are limited to short measurement distances because light attenuates as it passes through liquid. Several 3D measurement methods for liquid environments (mainly underwater) have been proposed [2][3][4][5]. In this paper, we employ image measurement to measure a submerged object in a transparent vessel.

To measure a submerged object by image measurement, the effects of light refraction must be considered. Light traveling from a submerged object to a camera passes through the liquid, the vessel (e.g. glass or acrylic) and air. The refractive indices of these media differ. Therefore, the outside surface and the inside surface of the vessel become refraction surfaces at which the direction of light changes: the outside surface is the boundary between air and the vessel, and the inside surface is the boundary between the vessel and the liquid. An example of the refraction effect is shown in Fig. 1. Fig. 1(a) is an image of a frog model in a vessel half filled with water, and Fig. 1(b) is an image of the same scene without water. The appearances below the water surface differ because of the refraction of light.

Fig. 1. Refraction effect: (a) in water; (b) in air.
When a captured image is distorted by refraction, accurate measurement cannot be obtained directly. Li et al. proposed a method that measures submerged objects by ray tracing, taking the refraction of light into account [2]. The method determines the geometry of the refraction boundary with a calibration pattern and then measures submerged objects. To use this method for an object in a vessel, the shape and position of the transparent surfaces (the outside and inside surfaces of the vessel) are required. Ikeda et al. proposed a method for 3D measurement of both a transparent vessel and a submerged object with a laser range finder (LRF), consisting of a beam laser and a camera, mounted on a manipulator [4]. The measurement model of this approach is shown in Fig. 2. Part of the light irradiated by the laser of the LRF is reflected at the outside surface of the transparent vessel (this reflected light is hereinafter called the vessel reflection), and the majority of the irradiated light penetrates the transparent vessel. The penetrating light is reflected at the submerged object (this reflected light is hereinafter called the object reflection). Both the vessel reflection and the object reflection are captured with the camera of the LRF.

Fig. 2. Measurement model.
Fig. 3. Reflected lights through various paths.
Fig. 4. Procedure flow.

In general, the vessel reflection is very weak but still detectable, because a transparent vessel is not completely transparent. Therefore, both the transparent vessel and the submerged object can be measured by triangulation using the reflections in the captured images.

The above method assumes that a captured image contains only the vessel reflection and the object reflection. However, reflections from various other paths may exist in the image (Fig. 3), and the method produces a measurement error whenever a false reflection is chosen for triangulation.

In this paper, we propose a method for 3D measurement of a submerged object in a transparent vessel of unknown shape using the LRF. The method chooses the correct reflection in the image for triangulation by examining the behavior of each reflection when the posture of the LRF is changed. It then determines the 3D coordinates of the vessel surface and obtains the shape of the object by ray tracing.

II. OUTLINE

The proposed method performs 3D measurement of the vessel and the submerged object with the LRF mounted on the manipulator. The LRF, moving along the vessel surface, irradiates a laser beam onto the object through the vessel and captures an image of the laser spots reflected at both the vessel surface and the object surface. The schematic diagram of the proposed method is the same as shown in Fig. 2. A light beam emitted from the laser source is projected toward the vessel. The beam is partly reflected at the vessel surface in the direction of the camera. The remaining light penetrates the vessel, is refracted twice, at the outside and the inside vessel surfaces, and reaches the object. The light reflected at the object then reaches the camera after refraction at the inside and the outside vessel surfaces. The proposed method captures the image of the reflections at both the vessel surface and the object surface, and measures the shapes of the vessel and the object.

The procedure flow is shown in Fig. 4. First, the LRF is moved to a measurement position. Then, an image is captured with the camera. In the image, candidates for reflections from the vessel and the object are detected. Among the candidates, the reflection from the vessel is identified. These steps are repeated until they have been executed for all measurement positions. After acquisition of all the reflection data, the method calculates the 3D coordinates of the vessel surface by triangulation and performs model fitting. Finally, the shape of the submerged object is obtained by ray tracing. Fig. 5 shows an example of the movement of the measurement position.

Fig. 5. Image capturing with zigzag scanning.
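To make the data flow concrete, the following is a minimal sketch of the procedure of Fig. 4 as a driver loop. All names here (measure_scene, move_to, capture, and the helpers elaborated in Section III) are hypothetical placeholders for illustration, not identifiers from the original system.

```python
# Minimal sketch of the procedure flow of Fig. 4 (hypothetical names).

def measure_scene(lrf, positions, epi):
    """Scan all measurement positions, then reconstruct vessel and object.

    epi = (a_e, b_e, t_e): epipolar line parameters and tolerance (Sec. III-B).
    """
    records = []
    for pose in positions:                       # zigzag scan as in Fig. 5
        lrf.move_to(pose)                        # move the LRF with the manipulator
        image = lrf.capture()                    # capture laser spots
        candidates = detect_candidates(image)    # Sec. III-A
        vessel = identify_vessel_reflection(lrf, candidates, *epi)  # Sec. III-B
        records.append((pose, vessel, candidates))

    # After all positions: triangulate the vessel, fit a model, ray-trace.
    model = fit_vessel_model(triangulate_vessel(records))    # Sec. III-C
    return raytrace_object(records, model)                   # Sec. III-D
```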
III. PRINCIPLE OF MEASUREMENT

A. Detection of candidate points

The following describes the process of detecting candidate points for vessel reflections and object reflections in an image captured with the camera. The method proposed in [4] employs image thresholding to detect candidate points. However, thresholding is not appropriate here, because the intensity of the vessel reflection is very unstable. We therefore employ a peak filter to detect candidate points, assuming that those points have locally peak intensities in the image. The detection procedure is shown in Fig. 6.

Fig. 6. Detection of candidate points: conversion to grayscale, smoothing, peak filtering, labeling.

Images captured with the camera carry RGB color information. A weighted average converts an RGB color image into a grayscale intensity image, with the weights chosen according to the color property of the laser. Prior to peak filtering, noise in the grayscale image is reduced by smoothing; we employ Gaussian smoothing. The peak filter then converts the grayscale image into a peak image consisting of peak pixels (1) and non-peak pixels (0):

P(u, v) = \begin{cases} 1 & \text{if } I(u, v) = I_{\max}(u, v) \text{ and } \psi(u, v) = 1, \\ 0 & \text{otherwise,} \end{cases} \qquad (1)

where P(u, v) is the value of the peak image at pixel (u, v), I(u, v) is the intensity at (u, v), I_{\max}(u, v) is the maximum intensity among the pixels surrounding (u, v), and \psi(u, v) is a factor that reduces the effect of noise at low intensities:

\psi(u, v) = \begin{cases} 1 & \text{if } I_{\max}(u, v) - I_{\min}(u, v) > T_p, \\ 0 & \text{otherwise,} \end{cases} \qquad (2)

where I_{\min}(u, v) is the minimum intensity among the surrounding pixels and T_p is a threshold. The filter marks a pixel as a peak pixel when its intensity equals the local maximum, and this operation is executed for every pixel of the captured image. One peak may spread over several pixels because of saturation, so labeling is executed to concatenate neighboring peak pixels into groups in the image. The candidate points are then obtained as the centers of gravity of the concatenated groups.
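The following is a minimal sketch of this detection step using NumPy and SciPy. The grayscale weights, the 3x3 neighborhood, and the threshold value are illustrative assumptions; only the structure (equations (1) and (2), labeling, centers of gravity) follows the text above.

```python
# Hedged sketch of candidate-point detection (Sec. III-A).
import numpy as np
from scipy import ndimage

def detect_candidates(rgb, t_p=20.0, sigma=1.0):
    """Return candidate points (u, v) for laser reflections in an RGB image."""
    img = np.asarray(rgb, dtype=float)
    # Weighted grayscale conversion emphasizing the red laser (assumed weights).
    gray = 0.7 * img[..., 0] + 0.2 * img[..., 1] + 0.1 * img[..., 2]
    gray = ndimage.gaussian_filter(gray, sigma)      # Gaussian smoothing

    # Local extrema over a 3x3 neighborhood.
    i_max = ndimage.maximum_filter(gray, size=3)
    i_min = ndimage.minimum_filter(gray, size=3)

    # Equations (1) and (2): local maxima with sufficient contrast.
    peak = (gray == i_max) & ((i_max - i_min) > t_p)

    # Labeling merges saturated peaks; take centers of gravity as candidates.
    labels, n = ndimage.label(peak)
    centers = ndimage.center_of_mass(gray, labels, range(1, n + 1))
    return [(c, r) for r, c in centers]              # (u, v) = (column, row)
```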
B. Detection of vessel reflection

To acquire the position of the vessel reflection, we propose an identification method based on changing the posture of the LRF, combined with a method of reflection tracking. Fig. 7 shows the procedure of identification and tracking of the vessel reflection. A vessel reflection satisfies the epipolar constraint. Therefore, the position (u_d, v_d) of the vessel reflection is restricted to the vicinity of the epipolar line within an error tolerance T_e:

|v_d - (a_e u_d + b_e)| < T_e, \qquad (3)

where a_e and b_e are the parameters of the epipolar line.

Fig. 7. Detection of vessel reflection.

When two or more candidate points exist along the epipolar line, the vessel reflection must be chosen among them. If we change the posture of the LRF, the candidate points move in the image (Fig. 8). The movement of the vessel reflection is restricted to the epipolar line, whereas false candidate points may leave it. Accordingly, false candidate points can be removed with (3) by gradually changing the posture of the LRF with the manipulator. This process continues until a single candidate point remains, which is determined to be the vessel reflection.

Fig. 8. Vessel reflection identification by changing posture: candidate points within the permitted range T_e of the epipolar line are kept; points that leave the line are excluded.

Executing this identification process at every measurement, however, is not efficient. The vessel reflection can instead be tracked from image to image, because the positions of laser irradiation on the vessel in two consecutive measurements are close to each other, provided that the movement is very small and the vessel surface is smooth. Once the vessel reflection has been identified in one measurement, it is tracked efficiently in the image of the next measurement. Tracking fails, however, when other candidate points lie close to the previous vessel reflection. Hence, we combine the identification and tracking approaches: when a previous vessel reflection exists and tracking succeeds, the tracked point is taken as the vessel reflection; if tracking fails, the identification process is executed.
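A minimal sketch of the identification step follows. The epipolar parameters (a_e, b_e) and the tolerance T_e are assumed to be known from calibration; change_posture and capture are hypothetical manipulator and camera calls, and detect_candidates is the sketch from Section III-A.

```python
# Hedged sketch of vessel-reflection identification (Sec. III-B, Eq. (3)).

def on_epipolar_line(pt, a_e, b_e, t_e):
    u, v = pt
    return abs(v - (a_e * u + b_e)) < t_e            # Eq. (3)

def identify_vessel_reflection(lrf, candidates, a_e, b_e, t_e, max_tilt=10):
    # Keep only candidates near the epipolar line.
    survivors = [c for c in candidates if on_epipolar_line(c, a_e, b_e, t_e)]
    tilt = 0
    while len(survivors) > 1 and tilt < max_tilt:
        tilt += 1
        lrf.change_posture(tilt)                     # 1-degree steps (Sec. IV)
        fresh = detect_candidates(lrf.capture())
        # A true vessel reflection stays on the epipolar line after the
        # posture change; false reflections drift off it or disappear.
        survivors = [c for c in fresh if on_epipolar_line(c, a_e, b_e, t_e)]
    return survivors[0] if len(survivors) == 1 else None
```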

C. Measurement of transparent vessel and model fitting

Measurement of the vessel surface by triangulation begins with the outside surface, using the detected vessel reflection points. From the 3D coordinates of the measured point group, the method fits an approximating geometric model to the group. It then determines the normal vectors of the outside vessel surface from the approximated model and estimates the 3D coordinates of points on the inside vessel surface by assuming that the thickness of the vessel is uniform. The approximating geometric model of the inside vessel surface is determined in the same way.

In the above triangulation, the accuracy of the image coordinates of the vessel reflection is very important. However, noise causes some deviation between the reflections and the epipolar line. We therefore project the image coordinates of the reflection peak onto the epipolar line and correct the position with sub-pixel accuracy by interpolating the peak position of an approximated Gaussian distribution estimated from the intensity profile along the epipolar line [6].

D. Measurement of submerged object

Measurement of the submerged object is realized by ray tracing [2] of the laser ray from the LRF and of the camera ray. Fig. 9 shows the geometry of ray tracing for the laser. The ray starts from position p_{l1}, the position of the laser light source, in the direction v_{l1} = (\alpha_1, \beta_1, \gamma_1)^T. Symbols n_1, n_2 and n_3 denote the refractive indices of air, vessel and liquid, respectively.

Fig. 9. Ray tracing.

The position p_{l2} and the normal vector m_1 = (\lambda_1, \mu_1, \nu_1)^T at the intersection with the outside surface of the vessel are obtained from the approximating model of the vessel surface. The direction vector v_{l2} = (\alpha_2, \beta_2, \gamma_2)^T of the ray inside the vessel is calculated from Snell's law as

\begin{pmatrix} \alpha_2 \\ \beta_2 \\ \gamma_2 \end{pmatrix} = \frac{n_1}{n_2} \begin{pmatrix} \alpha_1 \\ \beta_1 \\ \gamma_1 \end{pmatrix} + \kappa_1 \begin{pmatrix} \lambda_1 \\ \mu_1 \\ \nu_1 \end{pmatrix}, \qquad (4)

where

\kappa_1 = \sqrt{1 - (1 - t_1^2)\left(\frac{n_1}{n_2}\right)^2} - \frac{n_1}{n_2} t_1, \qquad (5)

t_1 = v_{l1} \cdot m_1. \qquad (6)

The position p_{l3} and the normal vector m_2 = (\lambda_2, \mu_2, \nu_2)^T at the intersection with the inside surface of the vessel are likewise determined from the approximated model, and the direction vector v_{l3} = (\alpha_3, \beta_3, \gamma_3)^T of the ray in the liquid is calculated as

\begin{pmatrix} \alpha_3 \\ \beta_3 \\ \gamma_3 \end{pmatrix} = \frac{n_2}{n_3} \begin{pmatrix} \alpha_2 \\ \beta_2 \\ \gamma_2 \end{pmatrix} + \kappa_2 \begin{pmatrix} \lambda_2 \\ \mu_2 \\ \nu_2 \end{pmatrix}, \qquad (7)

where

\kappa_2 = \sqrt{1 - (1 - t_2^2)\left(\frac{n_2}{n_3}\right)^2} - \frac{n_2}{n_3} t_2, \qquad (8)

t_2 = v_{l2} \cdot m_2. \qquad (9)

The direction and normal vectors v_{l1}, v_{l2}, v_{l3}, m_1 and m_2 are all unit vectors. The ray of the camera is traced in the same manner. The intersection of the laser ray and the camera ray is determined as the 3D measurement point of the object. In general, the two rays do not intersect exactly because of noise, so the midpoint of the shortest segment connecting the two rays is taken as the measurement point.

Measurement of the submerged object also requires detection of the correct object reflection in the image. If there are multiple candidate points in the image, any candidate whose ray passes the laser ray at a distance longer than a threshold is removed as a false reflection. Among the remaining candidates, the one with the maximum intensity is determined to be the object reflection.
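The refraction of equations (4) to (9) and the midpoint construction can be sketched as follows. This is an illustrative implementation under stated assumptions: the normal m is a unit vector oriented to the same side as the ray (v . m > 0), and no total internal reflection occurs.

```python
# Hedged sketch of the ray-tracing step (Sec. III-D, Eqs. (4)-(9)).
import numpy as np

def refract(v, m, n_in, n_out):
    """Refract unit direction v at a surface with unit normal m (v . m > 0)."""
    r = n_in / n_out
    t = np.dot(v, m)                                       # Eq. (6)/(9)
    kappa = np.sqrt(1.0 - (1.0 - t * t) * r * r) - r * t   # Eq. (5)/(8)
    return r * v + kappa * m                               # Eq. (4)/(7)

def ray_midpoint(p1, v1, p2, v2):
    """Midpoint of the shortest segment between two (possibly skew) rays."""
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d = p1 - p2
    denom = a * c - b * b                    # zero only for parallel rays
    s = (b * (v2 @ d) - c * (v1 @ d)) / denom
    t = (a * (v2 @ d) - b * (v1 @ d)) / denom
    return 0.5 * ((p1 + s * v1) + (p2 + t * v2))
```

For instance, refract(v_l1, m_1, 1.00, 1.49) would give the direction inside an acrylic wall for a ray arriving from air, and a second call with indices (1.49, 1.33) would give the direction in water, mirroring the laser path of Fig. 9.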

IV. EXPERIMENTS

Fig. 10 shows the measurement system used for the experiments. The manipulator had three axes for changing posture to identify vessel reflections. Posture changes were executed in steps of 1 degree over a range of ±10 degrees. The specifications of the system are listed in Table I. The experiments were executed in a darkroom.

TABLE I
SPECIFICATIONS OF THE SYSTEM
  Camera resolution     1024 x 768 pixels
  Wavelength of laser   635 nm (red)
  Manipulator movement  1 mm

Fig. 10. Measurement system: manipulator, camera, laser, and the axes of changing posture.

A. Ascertainment of vessel reflection detection

A glass vessel (Fig. 11) was measured to ascertain the performance of vessel reflection detection. Table II shows the result. In most cases, vessel reflections were detected by tracking. In rare cases, detection failed because the intensity of the vessel reflection was too weak to be captured with the camera.

Fig. 11. Glass vessel.

TABLE II
DETAIL OF VESSEL REFLECTION DETECTION
  Measurement positions                100 x 40 (4000 points)
  Frequency of identification process  27
  Frequency of tracking process        3934
  Frequency of detection failure       39

An example of the detection process is shown in Fig. 12. The image captured in the original posture (Fig. 12(a)) has three candidate points along the epipolar line. In the second posture (Fig. 12(b)), the right-hand candidate point was excluded because it disappeared. In the third posture (Fig. 12(c)), the left-hand candidate point was excluded because it moved off the epipolar line. Hence, the middle point was identified as the vessel reflection (Fig. 12(d)).

Fig. 12. Process of vessel reflection identification: (a) original posture; (b) 1 degree tilt; (c) 2 degree tilt; (d) identification.

A measurement result for the glass vessel is shown in Fig. 13: Fig. 13(a) is the result of the proposed method, and Fig. 13(b) is the result of a simple method. The result of the simple method contains errors of measured points derived from the detection of false reflections, whereas the result of our method shows the correct shape of the glass vessel.

Fig. 13. Measurement of glass vessel: (a) the proposed method; (b) a simple method.

B. Evaluation of measurement accuracy

The accuracy of our system was evaluated by measuring a trapezoid object in a cylindrical vessel filled with water. The cylindrical vessel is shown in Fig. 14 and the trapezoid object in Fig. 15. Our system measured the middle, right and left planes of the trapezoid object. The width of the middle plane (50 mm) and the angles between the middle plane and each side plane (30 deg.) were evaluated. Table III details the conditions of the measurement.

Fig. 14. Cylindrical vessel.
Fig. 15. Trapezoid object: middle plane 50 mm wide, side planes at 30 deg.

TABLE III
DETAIL OF MEASUREMENT FOR ACCURACY TEST
  Measurement positions            250 x 40 (10000 points)
  Radius of cylindrical vessel     165.5 mm
  Thickness of cylindrical vessel  5 mm
  Material of cylindrical vessel   acrylic
  Width of trapezoid object        50 mm
  Angles of trapezoid object       30 degrees
  Refractive indices               air: 1.00, acrylic: 1.49, water: 1.33

Fig. 16 shows the measurement result of the outside surface of the vessel in the coordinate system of the cylinder. The radius of the outside surface was estimated from this result as 165.5 mm by using the Nelder-Mead method [7]; a sketch of such a fit is given below.
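The following is a minimal sketch of estimating the cylinder parameters with the Nelder-Mead method via SciPy. The parameterization (axis assumed parallel to z, with center (cx, cy) and radius r) is an assumption made for illustration, not the paper's exact formulation.

```python
# Hedged sketch of cylinder fitting with the Nelder-Mead method [7].
import numpy as np
from scipy.optimize import minimize

def fit_cylinder(points):
    """points: (N, 3) measured outside-surface coordinates, axis along z."""
    def cost(params):
        cx, cy, r = params
        d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
        return np.sum((d - r) ** 2)                  # squared radial residuals

    x0 = np.array([0.0, 0.0, 2.0 * np.std(points[:, :2])])   # rough start
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x                                     # (cx, cy, r)
```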

The measurement result of the trapezoid object, obtained by ray tracing through the estimated cylindrical model, is shown in Fig. 17. As shown in Fig. 18, the measured points are divided into three planes with two crossing lines between them. The width of the middle plane was calculated as the distance between the crossing lines, and the angle between the crossing lines was calculated to evaluate their parallelism. The evaluation results are shown in Table IV. The accuracy of our measurement system was thus confirmed: the scale error was within 1 mm and the angle error was within 1 degree.

Fig. 16. Measurement result of cylindrical vessel.
Fig. 17. Measurement result of trapezoid object.
Fig. 18. Evaluation of measurement result: divided planes and crossing lines.

TABLE IV
EVALUATION RESULT OF TRAPEZOID PLANES
                                    Measured value   Ground truth
  Distance of crossing lines [mm]        49.5            50.0
  Angle of crossing lines [deg.]          0.4             0.0
  Angle of middle - right [deg.]         29.8            30.0
  Angle of middle - left [deg.]          30.5            30.0

C. Measurement of submerged object

A submerged object, a model of a dolphin (Fig. 19), was measured in place of a specimen. In this experiment, the same vessel and liquid as in Table III were used. The number of measurement positions was 250 x 70 (17500 points). Fig. 20 shows the measured points of the submerged object. As the result shows, the shape of the dolphin is expressed in detail.

Fig. 19. Model of dolphin.
Fig. 20. Measurement result.

V. CONCLUSION

We have proposed a method for 3D measurement of a transparent vessel and submerged objects using an LRF mounted on a manipulator. The method makes it possible to measure objects in a transparent vessel of unknown shape. Experimental results show the effectiveness of the proposed method. As future work, the 3D shape of a submerged object should be created by combining measurement results obtained from multiple viewpoints.

REFERENCES

[1] Y. Okamoto, T. Oishi and K. Ikeuchi, "Image-Based Network Rendering of Large Meshes for Cloud Computing," International Journal of Computer Vision, Vol. 94, Issue 1, pp. 12-22, August 2011.
[2] R. Li, H. Li, W. Zou, R. G. Smith and T. A. Curran, "Quantitative Photogrammetric Analysis of Digital Underwater Video Imagery," IEEE Journal of Oceanic Engineering, Vol. 22, No. 2, pp. 364-375, April 1997.
[3] A. K. Chong and P. Stanford, "Underwater Digital Stereo-Observation Technique for Red Hydrocoral Study," Photogrammetric Engineering and Remote Sensing, Vol. 68, No. 7, pp. 745-751, July 2002.
[4] A. Yamashita, S. Ikeda and T. Kaneko, "3-D Measurement of Objects in Unknown Aquatic Environments with a Laser Range Finder," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 3923-3928, April 2005.
[5] A. Yamashita, R. Kawanishi, T. Koketsu, T. Kaneko and H. Asama, "Underwater Sensing with Omni-Directional Stereo Camera," in Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (OMNIVIS 2011), pp. 304-311, November 2011.
[6] A. Prokos, G. Karras and E. Petsa, "Automatic 3D Surface Reconstruction by Combining Stereovision with the Slit-Scanner Approach," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII, Part 5, pp. 505-509, June 2010.
[7] J. A. Nelder and R. Mead, "A Simplex Method for Function Minimization," Computer Journal, Vol. 7, pp. 308-313, 1965.