
Towards specification, planning and sensor-based control of autonomous underwater intervention

Mario Prats, Juan C. García, J. Javier Fernández, Raúl Marín, Pedro J. Sanz
Jaume-I University, 12071 Castellón, Spain (e-mail: mprats@uji.es)

Abstract: This paper presents our recent advances in the specification, planning and control of autonomous manipulation actions in underwater environments. The user first specifies the task to perform on a target object found on a previously obtained seabed image mosaic. A suitable vehicle position and arm configuration is then planned in order to guarantee a successful manipulation. After the vehicle reaches the planned destination, the actual view of the object is matched to the one used for the specification, and the grasp points are tracked, enabling visual servoing of the manipulator end-effector. This allows the arm to compensate for the vehicle motion, thus increasing the reliability of the reaching action. Preliminary experiments performed in a simulated environment demonstrate the viability of this approach.

Keywords: Intervention Autonomous Underwater Vehicles, Marine systems, Underwater manipulation, User Interfaces, Visual Tracking

This research was partly supported by the European Commission Seventh Framework Programme FP7/2007-2013 under grant agreement 248497 (TRIDENT Project), by the Spanish Ministry of Research and Innovation under DPI2008-06548-C03 (RAUVI Project), and by Fundació Caixa Castelló-Bancaixa under P1-1B2009-50.

1. INTRODUCTION

Nowadays a relevant number of field operations with unmanned underwater vehicles (UUVs), in applications such as marine rescue, marine science and the offshore industries, to mention a few, require intervention capabilities in order to undertake the desired task. Currently, most intervention operations are carried out either by manned submersibles equipped with robotic arms or by Remotely Operated Vehicles (ROVs). Manned submersibles have the advantage of placing the operator in the field of operation, with a direct view of the object being manipulated. Their drawbacks are the reduced operation time (a few hours), the human presence in a dangerous and hostile environment, and the very high cost associated with the expensive oceanographic vessel needed to operate them. Work-class ROVs are probably the most established technology for deep intervention. They can be remotely operated for days without problems. Nevertheless, they still need an expensive oceanographic vessel with a heavy crane, an automatic Tether Management System (TMS) and a Dynamic Positioning system (DP). The cognitive fatigue of the operator, who has to take care of the umbilical and the ROV while controlling the robotic arms, is also a significant issue. For all these reasons, some researchers have recently started to work on the natural evolution of the intervention ROV: the Intervention AUV (I-AUV). Without the need for the TMS and the DP, light I-AUVs could theoretically be operated from cheap vessels of opportunity, considerably reducing the cost.

Fig. 1. The CAD model of the I-AUV prototype, being developed in coordination with the University of Girona.

In this paper we present our approach to autonomous underwater manipulation by means of a specific application scenario. We assume that a seabed image mosaic has been obtained from a previous survey step and is the input to our work. First, a user analyzes the seabed mosaic and looks for a target object of interest. If an interesting target is found, the operator indicates the best way to manipulate it using a dedicated graphical interface. A visual template of the target is then generated and used for detecting and tracking the object during the next step, when the vehicle is sent back into the water and the autonomous intervention starts. This work comprises the specification, planning and vision-based control of underwater manipulation actions.

In the scientific literature we can find some examples of manipulation applications in underwater robotic scenarios. One of the most challenging ones is the SAUVIM project, which presents advances in autonomous manipulation systems for intervention missions (Marani et al., 2009).

1.1 I-AUV Prototype

Figure 1 shows our I-AUV prototype, composed of the GIRONA 500 vehicle (built by the University of Girona) and the Light-weight ARM5E (see http://www.csip.co.uk/). The GIRONA 500 is a reconfigurable autonomous underwater vehicle (AUV) designed for a maximum operating depth of 500 m. The vehicle is composed of an aluminum frame which supports three torpedo-shaped hulls of 0.3 m in diameter and 1.5 m in length, as well as other elements such as thrusters. The Light-weight ARM5E is a robotic manipulator actuated by 24 V brushless DC motors. It is composed of four revolute joints and can reach distances of up to 1 m. An actuated gripper allows grasping small objects, and its T-shaped grooves also permit handling special tools. An eye-in-hand camera is mounted on the last link of the robot: a Bowtech DIVECAM-550C-AL high-resolution colour CCD camera, valid down to 100 m underwater.

1.2 Considered scenario

We consider an object recovery scenario, where our I-AUV has to look for a target object lying on the seabed (e.g. a shotgun) and grasp it. A strategy in two phases is adopted (Sanz et al., 2010) (see also Figure 2): in a first stage, the I-AUV is programmed at the surface and receives a plan for surveying a given Region of Interest (RoI). During the survey it collects data from an imaging system and other sensors. After ending this first stage, the I-AUV returns to the surface (or to an underwater docking station), where the data is retrieved and an image mosaic of the seabed is reconstructed. The Target of Interest (ToI) is then identified on the mosaic and the grasping action is specified by means of a suitable user interface. After that, during the second stage, the I-AUV navigates back to the RoI and executes the target localization and the intervention mission in an autonomous manner.

Fig. 2. The considered scenario: the I-AUV first maps the area of interest, and then performs the intervention.

This work is part of two coordinated projects, RAUVI (Spanish national project) and TRIDENT (European Commission FP7). In the context of these two projects, the main goal is not to navigate to great depths, but to reach new levels of autonomous manipulation capabilities that go beyond the current state of the art. Therefore, we consider intervention missions at depths of up to 100 m. Several of the capabilities assumed in this paper correspond to work being carried out by other partners; the reader is referred to those works where appropriate.

1.3 Simulated environment

A simulated environment has been developed for this project (see Figure 3). It has been implemented in C++ using the OpenSceneGraph libraries and osgOcean, which allows visualizing underwater effects such as silt, light attenuation, water distortion, etc. More concretely, the simulation environment includes:

- The I-AUV 3D model, including both the vehicle and the arm. The arm kinematics have also been implemented, making it possible to move the arm joints.
- A virtual camera attached to the bottom of the vehicle and facing downwards, and another virtual camera attached to the wrist of the arm. These cameras capture images of the simulated environment in real time.
- A lamp, placed on the bottom part of the vehicle and pointing towards the floor.
- A seabed model including a texture.
- A shotgun, lying on the seabed, which is the object that the I-AUV has to recover.

Fig. 3. The simulated environment showing the I-AUV prototype searching for the target.

The simulator has been implemented as a network server to which other software modules can connect in order to update the pose of the vehicle and arm, or to receive sensor input such as joint odometry or images from the virtual cameras. This allows it to be used as a mere visualizer: the actual control algorithms are not part of the simulator, but external to it, and can be interfaced to the real robot without modification. This leads to more realistic simulations. In fact, the simulator is external to the control architecture and is interfaced to it in the same way the real robot is interfaced (see Figure 4). This makes it easy to switch between the real robot and the simulated one without modifying the rest of the modules.

Fig. 4. An overview of the software architecture. Real and simulated robots use the same network interface. See (Palomeras et al., 2010) for details.
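To make the idea of a shared real/simulated interface concrete, the C++ sketch below outlines one possible shape for such an abstraction. It is purely illustrative: the class and method names (RobotInterface, setVehiclePose, getWristCameraImage, etc.) are ours and are not taken from the actual architecture described in (Palomeras et al., 2010).

```cpp
// Hypothetical sketch of a common interface shared by the real robot and the
// network-server simulator, so control modules can switch between the two
// without modification. Names and signatures are illustrative only.
#include <array>
#include <cstdint>
#include <vector>

struct Image {                      // a frame from a real or virtual camera
  int width = 0, height = 0;
  std::vector<std::uint8_t> pixels;
};

class RobotInterface {
public:
  virtual ~RobotInterface() = default;
  // Vehicle pose: (x, y, z, yaw); arm configuration: 4 joint values (rad).
  virtual void setVehiclePose(const std::array<double, 4>& pose) = 0;
  virtual void setArmJointVelocities(const std::array<double, 4>& qdot) = 0;
  virtual std::array<double, 4> getArmJointAngles() const = 0;  // joint odometry
  virtual Image getWristCameraImage() const = 0;                // eye-in-hand view
};

// A control module only sees RobotInterface; whether the other endpoint is the
// OpenSceneGraph simulator or the real vehicle is decided at construction time.
void controlStep(RobotInterface& robot) {
  const auto q = robot.getArmJointAngles();
  (void)q;  // ... compute and send commands here ...
}
```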

2. TASK SPECIFICATION

A graphical user interface (GUI) for target identification and task specification has been implemented (refer to García et al. (2010) for details). A seabed image mosaic is loaded into the GUI, and the user first looks for the ToI on it. After selecting the ToI, the intervention task is indicated by choosing between different pre-programmed actions such as recovery (grasping), hooking a cable, etc. The user interface contains built-in processing and planning algorithms that automate the specification process when possible. If the automatic methods fail, the user can always specify the task parameters manually.

Fig. 5. Different grasps computed on several objects with a specially designed user interface.

For this work we consider a grasping task, which can be specified by enclosing the ToI in a bounding box and selecting a pair of suitable grasp points, as shown in Figure 5 for different objects. When the specification is finished, an XML file containing the task parameters is generated. This file includes (see also Figure 6):

- The ToI bounding box origin with respect to the image origin, represented as (x, y, α), where (x, y) are pixel coordinates and α is the orientation of the bounding box with respect to the horizontal.
- The width and height of the bounding box.
- The grasp points, in pixel coordinates with respect to the bounding box origin.

Fig. 6. An example illustrating the grasp specification parameters.

With the bounding box information, a template containing only the ToI is created and later used for object detection and tracking (see Section 4.1).

3. VEHICLE POSE PLANNING

From the specification of the grasp to perform, it is necessary to compute a suitable vehicle position on top of the object so that the required grasp is feasible. This can be done by computing the inverse kinematics of the arm-vehicle kinematic chain.

3.1 Arm modelling

Figure 7 shows the geometric model of the I-AUV arm, whereas Table 1 contains the Denavit-Hartenberg parametrization of the model. From these, it is straightforward to compute the direct and inverse kinematic models of the arm, i.e. x_E = DK_a(q_a) and q_a = IK_a(x_E), where q_a = (q_1, q_2, q_3, q_4) are the arm joint angles and x_E is the arm end-effector Cartesian pose.

Fig. 7. The kinematic model of the Light-weight ARM5E.

Table 1. D-H parameters of the Light-weight ARM5E

Link   θ (deg)   d (mm)    a (mm)    α (deg)
1      0         0         80.52     90
2      5         0         442.78    0
3      115       0         118.52    90
4      0         0         0         0
Jaw    0         499.38    0         0
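As an illustration of the direct kinematics DK_a, the sketch below chains the homogeneous transforms of Table 1 to obtain the end-effector pose from the joint angles. It is a minimal sketch, not the paper's implementation: it assumes the standard Denavit-Hartenberg convention, assumes the joint variables add to the θ offsets of Table 1, and uses the Eigen library; the function names and the example values in main() are ours.

```cpp
// Minimal sketch of the arm direct kinematics DK_a(q_a) from the D-H
// parameters of Table 1 (standard D-H convention assumed). Illustrative only.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

// Homogeneous transform of one D-H link: Rz(theta) Tz(d) Tx(a) Rx(alpha).
Eigen::Matrix4d dhTransform(double theta, double d, double a, double alpha) {
  const double ct = std::cos(theta), st = std::sin(theta);
  const double ca = std::cos(alpha), sa = std::sin(alpha);
  Eigen::Matrix4d T;
  T << ct, -st * ca,  st * sa, a * ct,
       st,  ct * ca, -ct * sa, a * st,
        0,       sa,       ca,      d,
        0,        0,        0,      1;
  return T;
}

// Direct kinematics of the Light-weight ARM5E: base-to-jaw transform.
// Lengths converted from mm (Table 1) to metres.
Eigen::Matrix4d armDirectKinematics(const Eigen::Vector4d& q) {
  const double deg = std::acos(-1.0) / 180.0;  // degrees to radians
  Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
  T *= dhTransform(q(0) +   0.0 * deg, 0.0,     0.08052, 90.0 * deg);
  T *= dhTransform(q(1) +   5.0 * deg, 0.0,     0.44278,  0.0 * deg);
  T *= dhTransform(q(2) + 115.0 * deg, 0.0,     0.11852, 90.0 * deg);
  T *= dhTransform(q(3) +   0.0 * deg, 0.0,     0.0,      0.0 * deg);
  T *= dhTransform(0.0,                0.49938, 0.0,      0.0 * deg);  // jaw
  return T;
}

int main() {
  const Eigen::Vector4d q(0.0, 0.3, -0.5, 0.1);   // example joint angles [rad]
  std::cout << "End-effector pose:\n" << armDirectKinematics(q) << "\n";
}
```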

3.2 Vehicle-arm inverse kinematics

Given a pair of grasp points in pixel coordinates with respect to the origin of the bounding box, the goal is to find a suitable vehicle position that allows the arm to reach those grasp points. First, a 2D grasp frame G is created at the middle point of the line joining the two grasp points (see Figure 6). This frame will be used as the target for the grasping control system.

We assume that the seabed image mosaic used for the grasp specification is geo-referenced with respect to a fixed global frame W. This is not a major assumption, since it is common to have positioning sensors on AUVs, and it is possible to extract the 3D structure of the mosaic with inter-frame matching techniques (Ridao et al., 2010). Therefore, there is a known and direct conversion between pixels and meters that allows expressing the grasp frame with respect to a global 3D frame W. We represent this transform as a homogeneous transformation matrix ^W M_G, where G is the 3D Cartesian frame corresponding to the image 2D frame G.

At this point the kinematic model of Table 1 is extended to include four additional degrees of freedom (DOF) corresponding to the vehicle 3D position and yaw angle, i.e. q = (q_v, q_a), with q_v = (x, y, z, α). This constitutes an 8-DOF kinematic system for which the direct and inverse kinematic models are computed, leading to x_E = DK_va(q) and q = IK_va(x_E).

The goal of the grasping action is to match E with G. It is therefore possible to apply the inverse kinematic model to the grasp frame origin in world coordinates, ^W g, as q = IK_va(^W g), leading to a suitable q_v and q_a for the given task. Figure 8 shows the result of this process for the task of grasping an amphora.

Fig. 8. From the grasp specification, a suitable vehicle/arm configuration is computed through the inverse kinematics of the system.

Finally, it is convenient to represent the desired vehicle pose relative to the ToI pose, instead of with respect to a fixed absolute frame. In fact, it is possible that, due to underwater currents, when the I-AUV performs the intervention the target object is no longer found at the position it occupied in the mosaic, so a vehicle pose given with respect to the absolute frame of the mosaic would not be valid. It is then necessary to compute the vehicle pose relative to an object-related frame, e.g. G. This can be done as ^G M_V = (^W M_G)^{-1} ^W M_V, where ^W M_V is computed from q_v as ^W M_V = T_x(x) T_y(y) T_z(z) R_z(α), with T_i and R_i the homogeneous matrices representing a translation along and a rotation about axis i.
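The object-relative vehicle pose above is a simple chain of homogeneous transforms. The following sketch (again using Eigen; the function names are ours, not from the original implementation) composes ^W M_V from q_v = (x, y, z, α) and pre-multiplies by the inverse of the geo-referenced grasp-frame transform ^W M_G.

```cpp
// Sketch: desired vehicle pose expressed relative to the grasp frame G,
//   G_M_V = (W_M_G)^{-1} * W_M_V,  with  W_M_V = Tx(x) Ty(y) Tz(z) Rz(alpha).
// Illustrative only; names are not taken from the original implementation.
#include <Eigen/Dense>

// Homogeneous transform of the vehicle frame from q_v = (x, y, z, yaw).
Eigen::Matrix4d vehicleTransform(double x, double y, double z, double alpha) {
  Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
  T.block<3, 3>(0, 0) =
      Eigen::AngleAxisd(alpha, Eigen::Vector3d::UnitZ()).toRotationMatrix();
  T.block<3, 1>(0, 3) = Eigen::Vector3d(x, y, z);
  return T;
}

// Vehicle pose relative to the object frame G, given the geo-referenced
// grasp-frame transform W_M_G and the planned vehicle configuration q_v.
Eigen::Matrix4d objectRelativeVehiclePose(const Eigen::Matrix4d& W_M_G,
                                          const Eigen::Vector4d& q_v) {
  const Eigen::Matrix4d W_M_V =
      vehicleTransform(q_v(0), q_v(1), q_v(2), q_v(3));
  return W_M_G.inverse() * W_M_V;   // G_M_V
}
```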

4. VISION-BASED REACHING

After planning a suitable vehicle pose on top of the ToI, we assume that the vehicle navigation system is able to find the object and to reach the planned pose with respect to it (Bonin-Font et al., 2010). Once this step has been accomplished, the goal is to control the robot hand towards the grasp points. It cannot be assumed that the vehicle will accurately hold its position at the desired reference; on the contrary, it is very likely that its position will be disturbed by underwater currents and other dynamic effects. It is therefore necessary to continuously track the ToI and adjust the arm motion accordingly. For that, a vision-based control approach has been implemented. In this work, we restrict the grasping action to be planar, i.e. the ToI is viewed and approached from the top, so that only 4 DOFs need to be tracked and controlled: translation, rotation and scale in the image plane.

4.1 Visual tracking

In a manipulation context, vision is normally used to obtain an estimation of the object position and orientation that allows the robot to perform a specific action with it. Pose estimation techniques in robot vision can be classified into appearance-based and model-based approaches (Lepetit and Fua, 2005). Appearance-based methods work by comparing the current view of the object with views stored in a database of previously acquired images taken from one or multiple angles. The main advantage of these methods is that they do not need a 3D object model, although at least one previous view of the object taken from a known pose is required. In contrast, model-based methods need a 3D model as input, but normally achieve better accuracy and robustness.

In our approach we cannot assume that a 3D model of the object is always available. However, we assume that a previous view of the object has been taken during the survey phase and that it is geo-referenced in the seabed mosaic. Therefore, we adopt an appearance-based method; more concretely, a template tracking method has been implemented. During the task specification on the seabed mosaic, after the ToI has been selected, its bounding box is used for extracting a template of the object (see Figure 6). This template is matched at each iteration on the video stream during the intervention phase. A homography between the template and its match in the current image is computed and used for transforming the grasp points to their actual pose. For this, the Efficient Second Order Minimization method has been adopted (Benhimane and Malis, 2004). This method computes in real time the homography H that minimizes the sum of squared differences between a given template and the current image. It can deal with planar transformations including rotations and changes of scale. This homography allows transforming the origin of the grasp frame from the specification template to the current image as:

(u'_G, v'_G, w'_G)^T = H (u_G, v_G, 1)^T

where the transformed pixel coordinates in the current image are obtained after normalizing by w'_G. As an example, Figure 9 shows a sequence where the shotgun of Figure 6 is tracked, taking as template the bounding box defined during specification.

4.2 Vision-based control

From the corresponding grasp frame in the current image, a simple image-based control approach has been implemented for keeping the grasp frame centered in the image and aligned with the image axes. With s = (x, y, α) the feature vector containing the current grasp frame position and orientation, and s* = (c_x, c_y, 0) the feature vector representing the center of the image, the following control equation is computed at each iteration:

v_I = -λ L_s^+ (s - s*)

where L_s is the interaction matrix that relates camera motion to image feature motion (Chaumette and Hutchinson, 2006), and v_I = (v_x, v_y, ω_z) is the velocity to be applied in the camera frame in order to move s towards s*. With this simple control equation, only 3 DOFs are controlled: translation and rotation in the image plane. Motion towards the target is controlled by setting a constant velocity, v_z, in the direction perpendicular to the image plane. With v_C = (v_x, v_y, v_z, 0, 0, ω_z) the final velocity to be sent to the camera frame, it is transformed into arm joint velocities via the arm end-effector Jacobian J_E and the camera-to-end-effector screw transformation matrix ^E W_C:

q̇ = J_E^+ ^E W_C v_C
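A compact sketch of one control iteration is given below: the homography estimated by the tracker is applied to the grasp frame origin, the image error is regulated with the control law above, and the resulting camera velocity is mapped to joint velocities. The interaction matrix, the Jacobian and the camera-to-end-effector transform are taken as inputs rather than computed, and all names (warpGraspPoint, etc.) are ours, not from the paper's implementation.

```cpp
// Sketch of one vision-based control iteration (Section 4.2). The interaction
// matrix Ls, the end-effector Jacobian J_E and the camera-to-end-effector
// twist transform E_W_C are assumed to be provided by other modules.
#include <Eigen/Dense>

// Grasp frame origin warped from the specification template into the current
// image using the homography H estimated by the tracker (ESM in the paper).
Eigen::Vector2d warpGraspPoint(const Eigen::Matrix3d& H,
                               const Eigen::Vector2d& templatePoint) {
  const Eigen::Vector3d p = H * templatePoint.homogeneous();
  return p.hnormalized();   // divide by the homogeneous coordinate w'
}

// Image-based control law: v_I = -lambda * pinv(Ls) * (s - s_star), extended
// with a constant approach velocity v_z along the optical axis.
Eigen::Matrix<double, 6, 1> cameraVelocity(const Eigen::Vector3d& s,
                                           const Eigen::Vector3d& s_star,
                                           const Eigen::Matrix3d& Ls,
                                           double lambda, double v_z) {
  const Eigen::Vector3d v_I =
      -lambda * Ls.completeOrthogonalDecomposition().pseudoInverse() *
      (s - s_star);
  Eigen::Matrix<double, 6, 1> v_C;            // (v_x, v_y, v_z, 0, 0, w_z)
  v_C << v_I(0), v_I(1), v_z, 0.0, 0.0, v_I(2);
  return v_C;
}

// Camera-frame velocity mapped to arm joint velocities:
//   qdot = pinv(J_E) * E_W_C * v_C.
Eigen::Vector4d jointVelocities(const Eigen::Matrix<double, 6, 4>& J_E,
                                const Eigen::Matrix<double, 6, 6>& E_W_C,
                                const Eigen::Matrix<double, 6, 1>& v_C) {
  return J_E.completeOrthogonalDecomposition().pseudoInverse() * (E_W_C * v_C);
}
```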
4.3 Results

Figure 9 shows the results of two vision-based reaching experiments, one performed with a fixed vehicle position, and the other with disturbances introduced into the vehicle position. The tracking method and control approach of the previous sections have been adopted for these experiments, setting the parameters to c_x = 120, c_y = 160 (the virtual camera image is 320 × 240), λ = 0.0025 and v_z = 0.03 m/s. In both cases, the robot hand is able to reach the target successfully. Notice in the bottom row of Figure 9 how the arm motion compensates for the vehicle position error.

Fig. 9. The I-AUV tracking and reaching for the object. Top row: without vehicle disturbances. Bottom row: with disturbances in the vehicle position.

Figure 10 shows the position disturbances introduced into the vehicle and their effect on the image position of the ToI, with and without vision-based control. More concretely, Figure 10a shows the 3D trajectory of the vehicle control frame, whereas Figure 10b shows the image trajectory of the tracked grasp frame, with and without vision-based control. If vision-based control of the arm is not enabled (shown in red), the position of the ToI in the image cannot be controlled, and it is finally lost at the top of the image. On the contrary, with the control approach of the previous section, the arm visually compensates for the vehicle motion, and the ToI can be kept inside the image, at a reference position.

Fig. 10. Vehicle position disturbances and their effect on the image position of the grasp frame. (a) Vehicle position. (b) Grasp frame position in the image, with vision-based arm control (blue) and without it (red).

5. CONCLUSION

The lack of a continuous communication channel between AUVs and a surface station makes it necessary to develop autonomous control strategies that do not require human intervention. Autonomous manipulation is one of the key capabilities that need to be developed for these robots. This paper has presented our ongoing work on the specification, planning, perception and control of intervention missions with autonomous underwater robots.

We adopt an approach in which the task is specified by a human operator at the surface, after analyzing a previously obtained seabed image mosaic. The specification step provides the robot with all the information necessary for (i) finding and tracking the target on the seabed, and (ii) performing the intervention. Tracking is performed by locating, in the current image taken by the robot, the object template obtained during specification, using second-order minimization techniques. The visually detected target position is used as feedback to a control approach that moves the robot hand towards the grasp points while trying to keep the target in a desired part of the image. As a result, the I-AUV arm is able to reach the target even under disturbances in the vehicle position. The feasibility of this approach has been demonstrated in a simulated environment.

5.1 Future work

Future work will focus on applying these techniques to the real robot. We are currently developing the low-level control of the Light-weight ARM5E and interfacing the eye-in-hand camera with the PC. Once these steps are accomplished, the algorithms presented herein will be directly applicable to the real arm. One potential improvement to this work is to implement the visual tracking algorithm on an FPGA, as we have already done in previous work (Marin et al., 2009). This would boost the tracking performance and would free the PC CPU from the vision task. Finally, we would also like to improve the user interface following the ideas of Backes et al. (1997), in order to build a distributed and collaborative environment where several scientists in different places can interact with the system.

REFERENCES

Backes, P., Tharp, G., and Tso, K. (1997). The web interface for telescience (WITS). In IEEE International Conference on Robotics and Automation, volume 1, 411-417.

Benhimane, S. and Malis, E. (2004). Real-time image-based tracking of planes using efficient second-order minimization. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 943-948.

Bonin-Font, F., Burguera, A., Ortiz, A., and Oliver, G. (2010). Towards monocular localization using ground points. In IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). Bilbao, Spain.

Chaumette, F. and Hutchinson, S. (2006). Visual servo control, part I: Basic approaches. IEEE Robotics and Automation Magazine, 13(4), 82-90.

García, J.C., Fresneda, J.F., Sanz, P., and Marín, R. (2010). Increasing autonomy within underwater intervention scenarios: The user interface approach. In 2010 IEEE International Systems Conference, 71-75. San Diego, CA, USA.

Lepetit, V. and Fua, P. (2005). Monocular model-based 3D tracking of rigid objects. Foundations and Trends in Computer Graphics and Vision, 1(1), 1-89.

Marani, G., Choi, S.K., and Yuh, J. (2009). Underwater autonomous manipulation for intervention missions AUVs. Ocean Engineering, 36(1), 15-23.

Marin, R., Leon, G., Wirz, R., Sales, J., Claver, J., Sanz, P., and Fernandez, J. (2009). Remote programming of network robots within the UJI industrial robotics telelaboratory: FPGA vision and SNRP network protocol. IEEE Transactions on Industrial Electronics, 56(12), 4806-4816.

Palomeras, N., García, J.C., Prats, M., Fernández, J.J., Sanz, P.J., and Ridao, P. (2010). Distributed architecture for enabling autonomous underwater intervention missions. In 4th Annual IEEE International Systems Conference, 159-164. San Diego, CA, USA.

Ridao, P., Carreras, M., Ribas, D., and Garcia, R. (2010). Visual inspection of hydroelectric dams using an autonomous underwater vehicle. Journal of Field Robotics, 27, 759-778.

Sanz, P., Prats, M., Ridao, P., Ribas, D., Oliver, G., and Ortiz, A. (2010). Recent progress in the RAUVI project: A reconfigurable autonomous underwater vehicle for intervention. In 52nd International Symposium ELMAR, 471-474. Zadar, Croatia.