WELL STRUCTURED ROBOT POSITIONING CONTROL STRATEGY FOR POSITION BASED VISUAL SERVOING


Proceedings of the 2001 IEEE International Conference on Robotics & Automation, Seoul, Korea, May 21-26, 2001

WELL STRUCTURED ROBOT POSITIONING CONTROL STRATEGY FOR POSITION BASED VISUAL SERVOING

M. Bachiller*, A. Adán**, V. Feliu***, C. Cerrada****

* Facultad de Ciencias, UNED, SPAIN (marga@dia.uned.es)
** Escuela Técnica Superior de Ingeniería Informática, Universidad de Castilla-La Mancha, SPAIN (aadan@infcr.uclm.es)
*** Escuela Técnica Superior de Ingenieros Industriales, Universidad de Castilla-La Mancha, SPAIN (vfeliu@indcr.uclm.es)
**** Escuela Técnica Superior de Ingenieros Industriales, UNED, SPAIN (ccerrada@ieec.uned.es)

Abstract: In this paper we study a 3D visual servoing system composed of a robot manipulator, a CCD camera mounted on the end effector of the robot, and specific hardware. The use of computer vision as a feedback transducer strongly affects the closed loop dynamics of the overall system, so a visual controller is required to achieve fast response and high control accuracy. Due to the long time delay in generating the control signal, it is necessary to select the visual controller carefully. The objective of this work is to present a well structured design of efficient controllers that keep a desired relative 3D position between the object and the camera. Besides, an experimental setup has been built and used to evaluate the performance of the position based dynamic look and move system. Several results relative to different types of controllers are presented. These results demonstrate the possibility of getting competitive results in real time performance with respect to more closed solutions shown in other works, while maintaining the advantage of easy system adaptability.

1. INTRODUCTION

Intelligent robots require sensor based control to perform complex operations and to react to changes in the environment. The information about the system itself and its environment can be obtained from a great variety of sensors.
Vision is probably the sensor that provides the richest sensing information, but the processing of that information is also the most complicated. Nevertheless, computer vision has improved a lot in recent years and it is frequently used in robotic systems, although with serious limitations in real time applications due to the time necessary for image processing. The use of computer vision as a feedback transducer strongly affects the closed loop dynamics of the overall system. Latency is the most significant dynamic characteristic of vision transducers, and it has many sources, including transport delay of pixels from the camera to the vision system, image processing algorithms, control algorithms, software, and communications with the robot. This delay can cause instability in visual closed loop systems. To achieve fast response and high control accuracy, the design of a specific visual feedback controller is required. Visual servoing is the result of merging several techniques from different fields, including image processing, kinematics, dynamics, control theory and real time computing. An excellent overview of the main issues in visual servoing is given in [2]. Visual servoing architectures for controlling manipulators using an eye in hand configuration can be classified in two fundamental categories: dynamic look and move structures and visual servo structures. When the control architecture is hierarchical and uses the vision system to calculate the set of inputs to the joint level controller, making use of inner joint feedback loops to stabilise the robot, it is referred to as a dynamic look and move structure. In contrast, the visual servo structure eliminates the robot controller, replacing it with a visual controller that computes the joint inputs directly and constitutes the only loop used to stabilise the mechanism.
Concerning the controlled variable, visual servoing systems with eye in hand configuration are classified in two groups: image based control systems and position based control systems. In an image based control system the error variable is computed in the 2D image space. This approach eventually reduces the computational delay, eliminates the necessity of image interpretation and eliminates errors due to sensor modelling and camera calibration. However, the controller design is very complex. On the other hand, in a position based control system the error variable is computed in the 3D Cartesian space. The main advantage of the latter approach is that the camera trajectory is controlled directly in the Cartesian space.

0-7803-6475-9/01/$10.00 © 2001 IEEE

In this work we built a visual servoing system with a modular conception. This means that the overall system is composed of a set of independent modules that are put together to configure an open system. In this kind of system any module can be replaced by another one with the same functionality, and the visual controller computation procedure will not change. The goal of any visual servoing system is the same independently of how it is constructed: to control the robot's end effector pose relative to the target object pose. But the main advantage of this new consideration is that if a module has to be changed for any reason, it will be necessary neither to replace the others nor to redesign the full platform, but just to compute a new controller. In that sense our approach deals with the problem of controller design from a more generic point of view. This generality has not been found in previous research. On the contrary, most of the works on visual servoing reveal particular solutions applied to their specific platforms. The main differences between them appear in the type of control strategy used. There has been a significant amount of research activity on image based control methods [1, 4, 5, 9], whereas there have been only a few researchers working on position based control methods [6, 7, 8, 10]. This tendency can be justified because image based systems usually reduce computation delays (costly time consuming processes of image interpretation are not necessary) and because they also eliminate errors due to sensor modelling and camera calibration processes. In contrast, the controller is quite embedded in the overall system and its design strongly depends on the robot and the vision systems chosen. Anyway, real time performance tends to be better in image based controllers than in position based controllers. But from the point of view of building open systems it seems more adequate to consider position based controllers and dynamic look and move structures.
There are several reasons to think so:

- Position based methods allow a direct and more natural specification of the desired trajectories for the end effector in Cartesian coordinates.
- Many robots have an interface for accepting Cartesian velocity or incremental position commands. In that sense they are replaceable one for another, and all of them can be considered as black boxes.
- Conceptually, the dynamic look and move structure is more open than visual servo, because the vision system is separated from the controller and from the robot itself.
- The controller design can take advantage of a well structured problem like robot control. The use of more advanced control techniques is also propitiated, because the problem can be focused as a pure control problem.

In this article we design a position based dynamic look and move system built with an industrial manipulator and an off-the-shelf camera mounted on its end effector. One purpose is to demonstrate the possibility of getting competitive results in real time performance with respect to more closed solutions shown in other works and other control configurations, while maintaining the advantage of easy system adaptability. The remainder of this paper is structured as follows. In section 2 we deduce the model of the complete visual servoing system. In section 3 the design of several controllers is described. In the next section we evaluate these controllers; this section includes simulations and real experiments to compare the response of each controller and its performance. Finally, the discussion on control performance is presented in section 5.

2. VISUAL SERVOING SYSTEM MODEL

A visual servoing system can be considered as the integration of at least the following set of components: robotic subsystem, vision subsystem, control subsystem and communications. A scheme of these components and their respective links is depicted in figure 1.
Figure 1: Architecture of the robotic visual servoing system

The way this system works can be explained as follows. The vision subsystem is able to determine, at a given sampling rate, an error position vector in Cartesian coordinates, proportional to the sensed difference between the 3D target position and the 3D robot's end effector position. This information is sent to the control computer at the same rate through the communication link. Then the control subsystem generates the control signal to correctly drive the robot in order to cancel the position error. Finally, this signal is transmitted to the robotic subsystem through the communication link. Notice that the overall system is multirate in nature: the sampling interval of the manipulator is T_r, while the vision subsystem sampling period is T_v, T_v being larger than T_r. To simplify the study from the control point of view, the vision period can be adjusted to be n times the robot period, n being an integer value, i.e.:

T_v = n·T_r,  T_v > T_r    (1.1)
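The multirate timing above can be sketched in a few lines of code: the inner robot loop is serviced every T_r, while a new visual correction becomes available only on every n-th robot tick. This is an illustrative skeleton, not part of the paper's implementation:

```python
# Sketch of the multirate loop timing: the robot loop runs every T_r and the
# vision loop every T_v = n * T_r, so a new Cartesian correction is issued
# only on every n-th robot tick.  Values follow the paper's platform.
T_R_MS = 16          # robot (ALTER) period T_r in milliseconds
N = 10               # rate ratio n
T_V_MS = N * T_R_MS  # vision period T_v = n * T_r

def run_loop(total_robot_ticks):
    """Count how often each subsystem would be serviced."""
    robot_updates = 0
    vision_updates = 0
    for tick in range(total_robot_ticks):
        robot_updates += 1       # inner Cartesian servo, period T_r
        if tick % N == 0:        # vision result available at the T_v rate
            vision_updates += 1  # controller computes a new set-point
    return robot_updates, vision_updates

robot_updates, vision_updates = run_loop(1000)
```

With T_r = 16 ms and n = 10 this reproduces the 160 ms vision period used later in the experiments.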

Concerning notation, we will use z to represent the z-transform of a system sampled with period T_v, and ẑ for the z-transform of a system sampled with period T_r.

2.1. Subsystems Modelling

Figure 1 of the previous section shows the architecture of a system like the one considered. As has been explained there, four modules or subsystems integrate the full system. In order to get a valid model for each one, several considerations must be made in advance. The control subsystem is the objective of the design and consequently it will be considered in the next section. With respect to the communication links, from the dynamical point of view they can be considered as small delays that can be integrated in the vision or in the robot time periods. Therefore no dynamical modelling is required for them, and it is only necessary to have accurate dynamic models of the plant to be controlled (the robotic subsystem) and the sensor used (the vision subsystem).

Vision Subsystem Model

The vision subsystem provides at each sampling instant a set of values in the 3D space, as a triplet of Cartesian coordinates Δp = (Δx, Δy, Δz), representing the position increment that the robot has to perform to reach the target object. When the overall system is working with period T_v, the vision subsystem can be considered as a pure delay, so its transfer functions matrix is:

V(z) = diag(z^-1, z^-1, z^-1)    (1.2)

Figure 2 shows the representative block of the vision subsystem, where p_d is the desired object position, p_obj is the actual object position and p_r is the actual robot position. It also considers a noise signal r_s, produced for example by bad illumination or during the digitizing process.

Figure 2: Representative block of the vision subsystem

Robotic Subsystem Model

When the manipulator works through a specific command line to change its trajectory in real time, it can be experimentally seen that the robot behaves as a decoupled multivariable system. It can be considered as composed of 6 independent loops (three of them concerning the Cartesian position and the other three concerning the orientation). In this work we are only taking into account the three loops relative to the Cartesian position. Thus the manipulator with its actuators and their current feedback loops can be considered as a Cartesian servo device. In principle the robotic subsystem can be modelled as an integrator: its output is the actual robot position, which is equal to the previous one plus the incremental position input signal. It can be seen experimentally that this is not exactly the real behaviour: the robot presents a small delay in performing the movement corresponding to the input signal. Due to this, the transfer function proposed for each Cartesian coordinate is a second order system, so the transfer functions matrix of the robotic subsystem is given by:

G(ẑ) = diag( n_x/((ẑ-1)(ẑ-β_x)), n_y/((ẑ-1)(ẑ-β_y)), n_z/((ẑ-1)(ẑ-β_z)) )    (1.3)

Figure 3 shows the representative block of the robotic subsystem. The sampling time is T_r, and e is a noise signal that represents small errors due to data acquisition and that will be used in the identification process. That noise is modelled as a sequence of independent and identically distributed random variables with zero mean.

Figure 3: Representative block of the robot system

2.2. System's block diagram construction

If we join all subsystems in accordance with the position based dynamic look and move structure, we obtain the system's block diagram. It is shown in figure 4.

Figure 4: System's block diagram

Using equation (1.1) and doing some manipulations, the global block diagram converts into the one shown in figure 5, where T(ẑ) is given by:

T(ẑ) = diag( (1-ẑ^-n)/(1-ẑ^-1), (1-ẑ^-n)/(1-ẑ^-1), (1-ẑ^-n)/(1-ẑ^-1) )    (1.4)
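The structure of each robot axis in (1.3), a pole at ẑ = 1 plus a lag pole β, can be checked with a short simulation of the corresponding difference equation. This is a minimal sketch with illustrative values of K and β, not the identified ones:

```python
# Minimal sketch of one Cartesian axis of the robot model
# G(z^) = K / ((z^ - 1)(z^ - beta)), written as a difference equation.
# K and BETA are illustrative values only, not the identified digits of (1.5).
K, BETA = 0.68, 0.32

def step_response(steps):
    """Unit-step response of y(k) = (1+b)*y(k-1) - b*y(k-2) + K*u(k-2)."""
    y = [0.0, 0.0]          # y(0), y(1): the input is not yet visible
    for _ in range(steps - 2):
        y.append((1 + BETA) * y[-1] - BETA * y[-2] + K * 1.0)
    return y

y = step_response(200)
slope = y[-1] - y[-2]       # asymptotic position increment per T_r
# The pole at z^ = 1 integrates: the output ramps with slope K / (1 - BETA).
```

The ramping response with finite velocity gain K/(1 - β) is exactly the "integrator plus small delay" behaviour described above.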

Figure 5: Converted system's block diagram

In order to design the controller we need the transfer function matrix of the robotic subsystem sampled with period T_v. However, we know the transfer function matrix of the robotic subsystem sampled with period T_r. The identification of this matrix is normally divided into two parts: (1) estimation of the continuous transfer function matrix from the discrete transfer function matrix sampled with period T_r; (2) calculation of the discrete transfer function matrix by sampling the continuous transfer function matrix with period T_v. Applying this technique to G(ẑ) we obtain G(z). Then the block diagram of figure 5 can be changed to the simplified block diagram shown in figure 6.

Figure 6: Simplified block diagram

2.3. Identification

Once a model has been selected to represent a subsystem, the identification of its unknown parameters is required. In general, an identification experiment is performed by exciting the system (using some sort of input signal such as a step, a sinusoid or a random signal) and observing its input and output over a time interval. These signals are normally recorded in a computer for information processing. Then some statistically based method is used to estimate the unknown parameters of the model, such as the coefficients in the difference equation. The model obtained is then tested to see whether it is an appropriate representation of the system. If this is not the case, some more complex model must be considered, its parameters estimated and the new model validated. The vision subsystem used is formed by a camera and a PC where the image processing hardware is installed. In the previous sections we have modelled this subsystem as a pure delay, so there are no unknown parameters.
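A purely numerical way to cross-check the two-step resampling is to "lift" the T_r-rate model directly: simulate it with the input held constant over each vision period and read the output every n-th sample. The sketch below reuses illustrative coefficients K and β (assumed values, not the paper's identified digits):

```python
# Sketch: resample the T_r-rate axis model to the vision period T_v = N*T_r
# by holding the input for N robot samples and decimating the output.
# K and BETA are illustrative coefficients, not the identified digits of (1.5).
K, BETA, N = 0.68, 0.32, 10

def lifted_velocity_gain(vision_samples):
    """Asymptotic output increment per T_v for a unit step input."""
    y = [0.0, 0.0]
    for _ in range(N * vision_samples - 2):
        y.append((1 + BETA) * y[-1] - BETA * y[-2] + K * 1.0)
    y_tv = y[::N]              # the output as the vision subsystem sees it
    return y_tv[-1] - y_tv[-2]

gain_tv = lifted_velocity_gain(40)
# An integrating axis keeps its velocity gain, scaled by the rate ratio:
# gain_tv -> N * K / (1 - BETA), which fixes the DC data of G(z) in (1.6).
```

This check is useful because the velocity gain of each integrating axis must be preserved, multiplied by n, whichever discretization route is taken.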
The robotic subsystem used is a 6 DOF industrial manipulator Stäubli RX90 with a specific command line, the ALTER line, to modify its trajectory in real time. This command line allows new values to be introduced every 16 milliseconds; that is to say, in our platform T_r = 16 milliseconds. In this case we consider that the model of the robotic subsystem can be defined by a linear second order system. To identify the unknown parameters we use an autoregressive moving average with exogenous input (ARMAX) model. The obtained transfer function matrix is given by:

G(ẑ) = diag( 0.65/((ẑ-1)(ẑ-0.810)), 0.6840/((ẑ-1)(ẑ-0.194)), 0.7081/((ẑ-1)(ẑ-0.95)) )    (1.5)

Finally, to design the control subsystem it is necessary first to estimate the continuous transfer functions matrix (the method used is described in [11]) and then to sample it with period T_v. Considering n = 10 and substituting into equation (1.1) yields the following G(z) matrix:

G(z) = diag( (8.567z + 1.6499)/(z(z-1)), (8.574z + 1.4765)/(z(z-1)), (8.64z + 1.459)/(z(z-1)) )    (1.6)

3. CONTROL SUBSYSTEM DESIGN

The control objective is to move the robot, with the camera mounted on its end effector, in such a way that the projection of a static object appears in the desired location of the image. The most interesting solution is to achieve a response with no oscillations during the movement when the input is a step, while trying to keep the settling time small. Provided that the step can be of big magnitude, it is worth checking the Cartesian acceleration value in order to avoid the saturation of some motors. The first controller has been designed via the root locus. These controllers have the slowest performance, but they produce the smallest values of the Cartesian acceleration reference. A proportional controller is enough to control the robotic system. The gain is chosen so that all the closed loop poles are real.
The obtained controller is given by:

R(z) = diag(0.014 0.00 0.0 1)    (1.7)

The second controller tested has been designed by applying the pole placement methodology [1, 9]. The closed loop poles for each Cartesian coordinate are at p_1 = 0.1607, p_2 = 0.066 and p_3 = 0.0001. The settling interval is smaller than the value obtained with the proportional controller. We also propose the use of optimal control to design the visual controller. The objective of this method is to calculate the closed loop transfer function matrix that minimises the integral squared error between the output of the robotic subsystem and a desired output (p_rd). Of course, the controller depends on the desired output. In order to get a smooth output and the smallest settling time, we consider that the transfer functions matrix relative to the desired output is given by:

p_rd(z) = diag( a_x/((z-1)(z-a_x)), a_y/((z-1)(z-a_y)), a_z/((z-1)(z-a_z)) )    (1.8)
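As a sketch of how such designs can be checked, the T_v-rate loop of figure 6 (vision delay, visual controller, resampled robot axis) can be simulated directly. Here the x-axis coefficients of G(z) are taken as printed in (1.6) and the proportional gain as printed in (1.7); the loop equations are our own reading of the block diagram, not code from the paper:

```python
# Sketch of the T_v-rate closed loop for one axis: the vision subsystem is a
# one-sample delay (1.2), the visual controller is the proportional gain of
# (1.7), and the robot axis is G(z) = (a*z + b)/(z(z - 1)) as printed in (1.6).
A, B = 8.567, 1.6499      # x-axis coefficients of G(z), as printed
KP = 0.014                # x-axis proportional gain, as printed
TARGET = 10.0             # 10 mm step, as in the simulations of section 4

def closed_loop(samples):
    y, u = [0.0, 0.0], [0.0, 0.0]
    for _ in range(samples):
        error = TARGET - y[-1]                    # one-T_v-old position error
        u.append(KP * error)                      # proportional visual control
        y.append(y[-1] + A * u[-2] + B * u[-3])   # integrating robot axis
    return y

y = closed_loop(60)
# With this gain the three closed-loop poles are real, so the 10 mm step is
# reached without oscillation, at the cost of a slow settling interval.
```

The simulated response settles monotonically in roughly fifteen vision samples, which is consistent with the comparison reported in table 1 below.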

where the values of the parameters (a_x, a_y, a_z) are chosen to obtain the best behaviour of the output concerning speed and cancellation of oscillations. In the simulations we have used a value of 0. for all of them. The obtained controller is:

R(z) = diag( (0.0446z^3 + 0.045z^2)/(z^3 + 0.9849z^2 + 0.5180z + 0.0717), (0.0441z^3 + 0.0449z^2)/(z^3 + 0.9777z^2 + 0.516z + 0.066), (0.047z^3 + 0.0451z^2)/(z^3 + 0.9754z^2 + 0.5159z + 0.0644) )    (1.9)

Another design possibility is to use a deadbeat system, which produces the smallest settling interval. The transfer function matrix of these controllers is:

R(z) = diag( 0.0979z^2/(z^2 + z + 0.1615), 0.0995z^2/(z^2 + z + 0.1469), 0.0995z^2/(z^2 + z + 0.1419) )    (1.10)

4. SIMULATIONS AND EXPERIMENTS

The next objective is to evaluate the different control strategies in order to select the most efficient controller. In this work we have used MATLAB simulation software to carry out this evaluation. This section first summarizes the results obtained from simulations of all the designed feedback controllers, and then presents the results obtained in the experimental platform.

4.1 Simulation results

In order to compare the different controllers it is necessary to define a series of criteria that allow evaluating the behaviour of the robot system output. In this work we have chosen to consider the magnitude of the settling interval (n_s) and the maximum value of the Cartesian acceleration reference (r_a) that the robot is going to undergo in order to achieve the required position increase. Figure 7 presents the robot output obtained in each case, considering a noise signal r_s with a standard deviation equal to 0.1 mm, and table 1 shows the values relative to the coordinate X of each controller for an input step equal to 10 mm.

Figure 7: Robot system outputs, X (mm) vs. intervals (proportional, pole placement, optimal control and deadbeat)

From the point of view of the settling interval, the deadbeat controller is the best control strategy.
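The two comparison criteria can be made precise with small helper functions. This is our own reading of them, in which the acceleration reference is approximated from the differences of the commanded position increments u over the vision period:

```python
# Sketch of the two comparison criteria (our own reading, not the paper's code):
# n_s is the settling interval; r_a is the maximum Cartesian acceleration
# reference, approximated from the commanded position increments u.
def settling_interval(y, target, tol=0.02):
    """First sample index after which y stays within tol*target of target."""
    for k in range(len(y)):
        if all(abs(v - target) <= tol * abs(target) for v in y[k:]):
            return k
    return None

def max_accel_reference(u, t_v):
    """Max first difference of the position increments, scaled to mm/sec^2."""
    return max(abs(u[k] - u[k - 1]) for k in range(1, len(u))) / t_v ** 2

n_s = settling_interval([0.0, 5.0, 9.9, 10.0, 10.0], 10.0)
```

Applied to the simulated responses, such helpers yield the n_s and r_a columns of table 1.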
However, if we also consider the Cartesian acceleration reference value, the optimal controller is the best control method. Using this controller the steady state error becomes zero in 6 samples, besides getting the smallest values of the acceleration.

Controller          n_s    r_a (mm/sec^2)
Classic controller  15     1.75
Pole placement      5      5.55
Optimal controller  6      7.870
Deadbeat            4      61.1710

Table 1

4.2 Experiments

A set of experiments has been performed to validate the results obtained in simulations, in order to demonstrate the capability of the control methods designed in this work. We have developed a visual feedback control system consisting of a Stäubli RX90 manipulator, a PC (Pentium), a camera and MATROX image processing hardware. The vision computer is connected to the robotic subsystem through an RS-232 serial line working at 19200 baud, and calculates the position increment every 160 ms. However, it would be possible to reduce the value of the vision period by using more powerful image processing hardware. The manipulator's trajectory is controlled via the Unimate controller's ALTER line, which requires path control updates once every 16 ms. A photograph of the experimental platform is shown in figure 8.

Figure 8: Experimental setup

We have demonstrated in the simulations that the deadbeat controller produces the fastest response. Figure 9 shows the robot system output obtained in the experimental platform for a step of 10 mm along the x axis. The settling interval is 0.64 s. This controller improves the results presented in [1], where the settling interval was 0.85 s. Nevertheless, when we also consider the Cartesian acceleration reference value, the optimal controller is the best control strategy. This regulator manages to pose the robotic manipulator with respect to the target in 6 samples (0.96 sec.), besides producing small values of the acceleration reference, avoiding the saturation of the joint motors. This method improves the results presented by Vargas et al. [8], where the settling interval was 2.5 sec.
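The fast experimental response is consistent with a simulation of the deadbeat loop. This is a sketch using the x-axis plant coefficients as printed in (1.6) and our reading of the x entry of the deadbeat controller in (1.10) as 0.0979z^2/(z^2 + z + 0.1615):

```python
# Sketch: deadbeat visual control of the x axis at the vision rate T_v = 0.16 s.
# Plant G(z) = (a*z + b)/(z(z - 1)) with a, b as printed in (1.6); the
# controller is our reading of (1.10): R(z) = 0.0979z^2/(z^2 + z + 0.1615).
A, B = 8.567, 1.6499
TARGET = 10.0                       # 10 mm step along the x axis

def deadbeat_loop(samples):
    y, u, e = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
    for _ in range(samples):
        e.append(TARGET - y[-1])                           # vision: delayed error
        u.append(0.0979 * e[-1] - u[-1] - 0.1615 * u[-2])  # deadbeat R(z)
        y.append(y[-1] + A * u[-2] + B * u[-3])            # robot axis G(z)
    return y

y = deadbeat_loop(20)
# The output reaches the 10 mm target in about 3 vision samples, i.e. roughly
# half a second, in line with the settling measured on the platform.
```

The small residual error after the third sample comes only from the rounded controller coefficients.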

Figure 9: Step response with a deadbeat controller

Figure 10 shows the robot system output for an input step equal to 45 mm.

Figure 10: Step response with an optimal controller

5. CONCLUSIONS

In this article we have designed a position based dynamic look and move system for a 6 DOF industrial manipulator with a camera mounted on the end effector. The robot works using the ALTER facility. When operating in ALTER mode the robot is commanded in Cartesian coordinates; thus it is not necessary to solve the inverse kinematics problem, and no direct control of the servomotors is available. First, it was necessary to obtain a model of the robotic system. This model was validated in experimental tests comparing its response to the real response of the robotic system. Several controllers have been designed, studying the behaviour of the output of the robotic system. The next step was to examine the different control strategies in simulation to select the most efficient controller. These controllers have then been evaluated on the experimental platform to validate the results obtained in the simulations. Their most important characteristic is that they are simple, so they can be implemented in real time. The results presented in this paper demonstrate the effectiveness of the designed position based dynamic look and move system to pose the robot with respect to a static object. Besides, as the vision subsystem and the robot subsystem are independent, it is easy to change one of them without having to modify the rest of the system. Thus the overall system is open to accept specific improvements relative to the vision system.

6. REFERENCES

[1] P. I. Corke, "Visual control of robot manipulators: A review," in Visual Servoing, K. Hashimoto, Ed. Singapore: World Scientific, 1993.
[2] S. Hutchinson, G. D. Hager and P. I. Corke, "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651-670, 1996.
[3] F. Chaumette, P. Rives and B. Espiau, "Positioning of a robot with respect to an object, tracking it and estimating its velocity by visual servoing," Proc. 1991 IEEE Int. Conf. on Robotics and Automation, 1991.
[4] C. E. Smith, S. A. Brandt and N. P. Papanikolopoulos, "Eye-in-hand robotic tasks in uncalibrated environments," IEEE Transactions on Robotics and Automation, vol. 13, pp. 903-914, 1997.
[5] P. K. Khosla, N. P. Papanikolopoulos and T. Kanade, "Visual tracking of a moving target by a camera mounted on a robot: A combination of control and vision," IEEE Trans. Robot. Automat., vol. 9, pp. 14-35, 1993.
[6] W. Wilson, C. C. Williams and G. S. Bell, "Relative end-effector control using Cartesian position based visual servoing," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 684-696, 1996.
[7] P. K. Allen, A. Timcenko, B. Yoshimi and P. Michelman, "Automated tracking and grasping of a moving object with a robotic hand-eye system," IEEE Transactions on Robotics and Automation, vol. 9, no. 2, pp. 152-165, 1993.
[8] M. Vargas, F. R. Rubio and A. R. Malpesa, "Pose-estimation and control in a 3D visual servoing system," 14th Triennial World Congress, IFAC, 1999.
[9] K. Hashimoto, T. Ebine and H. Kimura, "Visual servoing with hand-eye manipulator: optimal control approach," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 766-774, 1996.
[10] A. J. Koivo and N. Houshangi, "Real-time vision feedback for servoing robotic manipulator with self-tuning controller," IEEE Transactions on Systems, Man and Cybernetics, vol. 21, no. 1, 1991.
[11] V. Feliu, "A transformation algorithm for estimating system Laplace transform from sampled data," IEEE Transactions on Systems, Man and Cybernetics, 1986.
[12] J. A. Gangloff, M. de Mathelin and G. Abba, "6 DOF high speed dynamic visual servoing using GPC controllers," Proc. 1998 IEEE Int. Conf. on Robotics and Automation, 1998.