Control of a Robot Manipulator for Aerospace Applications

Antonella Ferrara (a), Riccardo Scattolini (b)
(a) Dipartimento di Informatica e Sistemistica - Università di Pavia, Italy
(b) Dipartimento di Elettronica e Informazione - Politecnico di Milano, Italy

Abstract

A visual servoing system is presented in the paper. It has been realized by assembling a planar three-degrees-of-freedom robot manipulator with a video camera mounted on its end-effector. The target is a plate with four LEDs which must be kept within the image captured by the camera during the visual servoing experiment. The control variables are the torques generated by the three electric actuators at the joints. The natural flexibilities of the manipulator links are regarded as small disturbances to be counteracted by the feedback control loop.

Introduction

Visual servoing, that is, the use of one or more cameras and a computer vision system to control the position of a robot end-effector, is becoming common practice in industrial and medical robotics. Thanks to the improvements in computer performance and in the construction technology of cameras and frame-grabbers, this topic seems ready to be extended to the aerospace context. In this paper, the results of a preliminary study of visual servoing for aerospace applications are presented. The proposed scheme is based on a camera-in-hand configuration and on a dynamic look-and-move hierarchical approach, where the acquired image is used to generate the desired trajectory in the operational space. More specifically, the goal is to automatically move a planar three-degrees-of-freedom robot manipulator with a video camera mounted on its end-effector to the correct position with respect to a target object. The latter is a plate with four LEDs which must be kept within the image captured by the camera during the visual servoing experiment. The control variables are the torques generated by the three electric actuators at the joints.
The natural flexibilities of the manipulator links can be regarded as small disturbances to be counteracted by the feedback control loop. The experimental results included in the paper show the effectiveness of the considered approach in terms of both performance and robustness. They are an encouraging premise to future on-board aerospace experiments.

The Experimental Setup

The experimental apparatus is a planar manipulator with two links and a videocamera as end-effector, which can be seen as a further short link. The control goals are: (a)
to detect with the camera the relative position of the manipulator with respect to a target made by a plate with four LEDs mounted in a square configuration, and (b) to steer the arm so as to position the videocamera at a given distance and with a given orientation with respect to the fixed position of the target, or to track its movements.

Figure 1: The experimental system

More specifically, the experimental apparatus is depicted in Figure 1, while its physical parameters are summarized in Table 1. The arm is equipped with three brushless motors; the videocamera is a HITACHI KP-F3 with 1/4" CCD sensor and fixed focal length. The operational space, which accounts for the limitations imposed by the physical working space and by the geometry of the manipulator, is synthetically shown in Figure 2. It is assumed that the target can move along a straight vertical line at a distance d_target from the manipulator shoulder, while sp_min and sp_max are the minimum and maximum distance allowed between the camera and the target, sp is the current desired value, and B = d_target − sp. These geometrical values, in meters, are: d_target = 1.41, sp_max = 1.16, sp_min = 0.58, sp = 0.83, B = 0.58, and h = 1.62. All the algorithms for image acquisition and controller implementation described in the following have been developed in Matlab/Simulink, equipped with the Real Time Toolbox [1] for real time operations and the Robotics Toolbox [2]. The adopted hardware configuration is a standard Personal Computer equipped with I/O data boards and the PXC200 Framegrabber [3].

Image acquisition and elaboration

The algorithms developed for image acquisition have been extensively described in [4]. In synthesis, the acquired images must be elaborated to: (i) extract the position
        Meaning                                    Link 1     Link 2       Link 3
a_i     length of link i [m]                       0.49       0.49         0.19
l_i     position of the link center of mass i [m]  0.058      0.0615       0.094
m_li    mass of link i [kg]                        0.468      0.345        0.398
m_mi    mass of the joint motor i [kg]             1.448      1.03         0
I_li    inertia of link i [kg m^2]                 0.1533     0.2725       0.004
I_mi    inertia of the joint motor i [kg m^2]      8·10^-8    2.35·10^-8   7·10^-7
k_ri    reduction ratio of the joint motor i       100        100          5

Table 1: Physical parameters of the experimental system

Figure 2: The operational space of the manipulator
of the bright points (LEDs) of the target, (ii) reduce the deformation produced by the optical lenses, (iii) compute the relative position of the camera with respect to the target. These tasks are accomplished according to the following steps:

1. A simple edge detection algorithm has been used to isolate the vertices of the square target.
2. A sphere-filter has been developed to eliminate the spherical deformation of the image produced by the optical lenses.
3. An optical calculator has been implemented to determine the relative position of the target from the acquired image. This is a classical problem in analytical photogrammetry, see [5], which can be solved by properly formulating and solving a linear algebraic system whose unknowns are the positions of the focal point of the videocamera with respect to the target.

The overall algorithm has been calibrated with a number of experiments. In all the cases, the relative error has been less than 3%.

Modelling and simulation of the manipulator

In the preliminary analysis of the system and in the design of the control strategies, a dynamic model of the manipulator, including sensor and actuator characteristics, has been used for direct and inverse kinematics. As is well known, see, e.g., [6], the model of a rigid planar robot is described by the following set of equations

B(q) q̈ + C(q, q̇) q̇ + F_a(q̇) = τ    (1)

where q is the vector of joint displacements, B(q) is the inertia matrix, C(q, q̇) is the matrix of centripetal and Coriolis torques, and τ is the vector of applied joint torques. Finally, F_a(q̇) represents the friction torques, which have been modelled according to the LuGre model [7] for the first two joints, and with the model proposed in [8] for the joint of the end effector.

Image based control

Visual servoing control is an active area of research, e.g. [9, 10]. According to [9], two typical configurations can be used.
In the first one, usually called the eye-in-hand, or camera-in-hand, configuration, the camera is mounted on the manipulator end effector. In the second one, the camera is fixed in the workspace. For the long term purposes of this research, that is, the use of these results in aerospace applications, the camera-in-hand configuration turns out to be mandatory.
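Before turning to the control architecture, the rigid model (1) can be illustrated with a small numerical sketch. The following Python fragment is an illustration only, not the authors' Matlab/Simulink implementation: it uses the standard inertia and Coriolis matrices of a horizontal planar two-link arm (hence no gravity term, consistent with (1)), and a simple viscous term as an assumed stand-in for the LuGre friction model, with all numerical parameters chosen for illustration.

```python
import numpy as np

# Illustrative two-link parameters (assumed values, not the ones in Table 1):
# link length a1, centre-of-mass positions l1, l2, masses m1, m2, inertias I1, I2.
a1, l1, l2 = 0.49, 0.058, 0.0615
m1, m2 = 0.468, 0.345
I1, I2 = 0.1533, 0.2725
fv = np.array([0.1, 0.1])  # assumed viscous coefficients, stand-in for LuGre

def inverse_dynamics(q, qd, qdd):
    """tau = B(q) qdd + C(q, qd) qd + F_a(qd), the structure of eq. (1)."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    # Inertia matrix B(q) of the standard planar two-link model
    b11 = I1 + I2 + m1 * l1**2 + m2 * (a1**2 + l2**2 + 2 * a1 * l2 * c2)
    b12 = I2 + m2 * (l2**2 + a1 * l2 * c2)
    B = np.array([[b11, b12], [b12, I2 + m2 * l2**2]])
    # Centripetal/Coriolis matrix C(q, qd)
    h = -m2 * a1 * l2 * s2
    C = np.array([[h * qd[1], h * (qd[0] + qd[1])],
                  [-h * qd[0], 0.0]])
    Fa = fv * qd  # simple viscous friction term F_a(qd)
    return B @ qdd + C @ qd + Fa

tau = inverse_dynamics(np.array([0.3, 0.5]),
                       np.array([0.1, -0.2]),
                       np.array([0.5, 0.4]))
print(tau)  # joint torques [N m] for the sampled state
```

Evaluated along a reference trajectory, such an inverse-dynamics computation is what the feedforward terms of the control laws discussed below approximate.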
Figure 3: The reference position, velocity and acceleration

A further classification of visual servoing control systems concerns the use of the acquired image in the control problem formulation. Specifically, as described in [11], two operating modes can be followed. The first one, usually referred to as dynamic look and move, is a hierarchical structure where the vision system, which can be interpreted as a high level controller, is used to compute the reference signals for the low level controllers moving the joints. In the second one, called direct visual servo, the visual system acts as a servo controller and directly computes the joint inputs. As a matter of fact, the first approach is largely preferred for a number of reasons, see again [9], and is the one adopted in the following. More specifically, the main conceptual steps which have been followed in the definition of the control task are: 1) image acquisition and computation of the relative position of the arm with respect to the target position in the operational space; 2) computation of the coordinates of the required final (target) position in the operational space; 3) computation, via inverse kinematics, of the final position in the joint space; 4) computation of the reference trajectory given the current position in the joint space; 5) control action. As usual in industrial practice, Step 4 is performed by considering, for any joint, trajectories in the joint space described by simple second order polynomial functions with a trapezoidal velocity profile, an example of which is shown in Figure 3. In the definition of these trajectories, that is, of their final time and maximum velocity, constraints have to be imposed to guarantee the coordination of the motion of all three joints, so that they (at least approximately) reach their final positions at the same time and with comparable torques provided by the motors.
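The trajectory generation of Step 4 can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes symmetric acceleration and deceleration phases and illustrative limit values, and falls back to a triangular profile when the move is too short for the maximum velocity to be reached.

```python
import math

def trapezoidal_profile(q0, qf, v_max, a_max):
    """Return (q(t), T): position function and final time for a point-to-point
    move with a trapezoidal velocity profile (symmetric accel/decel assumed)."""
    D = qf - q0
    s = math.copysign(1.0, D)
    D = abs(D)
    if D * a_max <= v_max**2:   # triangular profile: v_max is never reached
        ta = math.sqrt(D / a_max)
        T = 2.0 * ta
        v = a_max * ta
    else:                        # full trapezoid
        ta = v_max / a_max
        T = D / v_max + ta
        v = v_max
    def q(t):
        t = min(max(t, 0.0), T)
        if t < ta:               # acceleration phase (parabolic position)
            d = 0.5 * a_max * t**2
        elif t < T - ta:         # cruise phase (linear position)
            d = 0.5 * a_max * ta**2 + v * (t - ta)
        else:                    # deceleration phase
            dt = T - t
            d = D - 0.5 * a_max * dt**2
        return q0 + s * d
    return q, T

# Example: a joint moving from 0 to 1.2 rad with v_max = 1 rad/s, a_max = 2 rad/s^2
q, T = trapezoidal_profile(0.0, 1.2, 1.0, 2.0)
print(T, q(0.0), q(T))
```

The coordination constraint mentioned above can then be enforced by computing the final time T for each joint and stretching the faster profiles so that all three joints share the largest T.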
Figure 4: The link positions during the experiment with the PD controller

Control algorithms

In the sequel, two control approaches will be introduced, and their use in the visual servoing loop will be discussed. As a preliminary step, the issue of friction compensation is briefly addressed.

Friction compensation

Preliminary to the definition of the feedback control strategy, and in view of the significant role played by friction in the overall manipulator dynamics, a simple friction compensation action has been imposed, so that the vector of applied joint torques τ is computed as

τ(t) = τ̄(t) + F̂_a(q̇(t))    (2)

where F̂_a(q̇) is the estimate of the friction torque provided by the adopted friction model, which can indeed be made adaptive by recursively estimating the model parameters, and τ̄ is the additional torque provided by the motors at the joints. In all the performed experiments, the nominal friction model turned out to be sufficiently precise to be used in real time operations without continuous adaptation.

PD based feedback control law

According to common practice, the regulator has been designed as a decentralized scheme where three proportional derivative (PD) controllers have been independently
synthesized for the three joint motors, neglecting the significant couplings of the system. Hence, the adopted control law is

τ̄(t) = K_P e(t) + K_D ė(t)    (3)

where e is the positional error, and K_P and K_D are positive definite diagonal matrices whose elements have been experimentally determined through simulation and an extensive set of trials.

Figure 5: The manipulator trajectory in the working space during the experiment with the PD controller

The results achieved in a standard experiment are shown in Figures 4 and 5. These figures show that the trajectory following is rather satisfactory, although there are some oscillations in the positions due to the drives, which work at a very low regime with respect to their nominal operating mode. Moreover, there is a final offset (see Figure 8a) due to the lack of an integral action in the adopted control law.

Adaptive PID feedback control law

In order to overcome the drawbacks of the PD based control law, the regulator structure has been enriched by the use of an integral action on the position error. Moreover, as suggested in [12], [13], three additional terms to be adaptively estimated on line have been included to improve the overall performance. The resulting control law, apart from the friction compensation term, is then given by

τ̄(t) = K_P e(t) + K_D ė(t) + K_I ∫₀ᵗ e(x) dx + A_B(t) q̈_d(t) − γ₁ ∫₀ᵗ f(x) dx    (4)

where K_I is the matrix of integral gains, q_d is the vector of the coordinates of the desired trajectory, while A_B and f are a matrix and a vector, respectively, which
are recursively updated. The term A_B(t) q̈_d(t) is included to provide a feedforward compensation of the manipulator inertia described by the matrix B, while the additional integral term depending on f, with γ₁ a suitable design parameter, is used to improve the regulator robustness, see [13]. Adaptation of the last two terms of (4) is achieved according to the following laws

ḟ(t) = −σ₁ f(t) + β₁ (ė(t) + λ e(t))
Ȧ_B(t) = −σ₂ A_B(t) + β₂ (ė(t) + λ e(t)) q̈_dᵀ(t)    (5)

where σ₁, σ₂, β₁, and β₂ are the adaptation gains to be suitably tuned.

Figure 6: The link positions during the experiment with the adaptive PID controller

The performance provided by the adaptive PID control law has been tested in an experiment analogous to the one illustrated in Figures 4 and 5. The results achieved are summarized in Figures 6 and 7. It is apparent that the performance in terms of trajectory following is better than that obtained with the PD regulators. Moreover, the final positional error is significantly reduced (see Figures 7 and 8b).

Conclusions

A visual servoing system consisting of a planar three-degrees-of-freedom robot manipulator with a video camera mounted on its end effector is presented in the paper. It is controlled so that the end effector reaches a suitable target. A friction compensation action, as well as a couple of control algorithms, are presented. The control
(a) PD controller  (b) Adaptive PID controller
Figure 7: The manipulator motion in the working space during the experiment with the two different controllers

Figure 8: The manipulator trajectory in the working space during the experiment with the adaptive PID controller

algorithms are simple enough to be compatible with real time application constraints: in fact, the most demanding task in the control loop is image processing. The performance of the controlled system has been experimentally tested, obtaining satisfactory results. The possibility of resorting to a direct visual servo, where the visual system acts as a servo controller and directly computes the joint inputs, is under investigation, and will be the subject of future work.
Acknowledgements

The research has been supported by ASI, Agenzia Spaziale Italiana.

References

[1] Humusoft s.r.o., Praha, Czech Republic, Real Time Toolbox for use with Matlab - User's Manual, Version 3, 1999.
[2] P.I. Corke, A Robotics Toolbox for Matlab, IEEE Robotics and Automation Magazine, Vol. 3, pp. 24-32, 1996.
[3] Imagenation Corporation, Beaverton (OR), USA, PXC200 Color Frame Grabber - User's Guide, Version 2, 1997.
[4] S. Gallone, R. Scattolini, Control of a Flexible Manipulator with a Visual Sensor End Effector, 3rd World Conference on Structural Control, Como, April 2002.
[5] K. Kraus, Photogrammetry, Dümmler, Bonn, 1997.
[6] B. Siciliano, L. Sciavicco, Modelling and Control of Robot Manipulators, Springer Verlag, 2000.
[7] C. Canudas de Wit, H. Olsson, K.J. Åström and P. Lischinsky, A New Model for Control of Systems with Friction, IEEE Trans. on Automatic Control, Vol. 40, pp. 419-425, 1995.
[8] J. Swevers, F. Al-Bender, C.G. Ganseman and T. Prajogo, An Integrated Friction Model Structure with Improved Presliding Behaviour for Accurate Friction Compensation, IEEE Trans. on Automatic Control, Vol. 45, pp. 675-686, 2000.
[9] S. Hutchinson, G.D. Hager, P.I. Corke, A Tutorial on Visual Servo Control, IEEE Trans. on Robotics and Automation, Vol. 12, pp. 651-670, 1996.
[10] R. Kelly, R. Carelli, O. Nasisi, B. Kuchen and F. Reyes, Stable Visual Servoing of Camera-in-Hand Robotic Systems, IEEE/ASME Trans. on Mechatronics, Vol. 5, pp. 39-48, 2000.
[11] A.C. Sanderson and L.E. Weiss, Image-Based Visual Servo Control Using Relational Graph Error Signals, Proc. IEEE, pp. 1074-1077, 1980.
[12] R.D. Colbaugh, E. Bassi, F. Benzi and M. Trabatti, Enhancing the Trajectory Tracking Performance Capabilities of Position-Controlled Manipulators, 2000 IEEE Industry Applications Conference, pp. 1170-1177, Rome, 2000.
[13] R.D. Colbaugh, K. Glass and H. Seraji, Decentralized Adaptive Control of Manipulators: Theory and Experiments, Proc. 32nd IEEE Conference on Decision and Control, pp. 153-158, San Antonio, 1993.