On Evolving Fuzzy Modeling for Visual Control of Robotic Manipulators

On Evolving Fuzzy Modeling for Visual Control of Robotic Manipulators

P.J.S. Gonçalves 1,2, P.M.B. Torres 2, J.R. Caldas Pinto 1, J.M.C. Sousa 1
1. Instituto Politécnico de Castelo Branco, Escola Superior de Tecnologia, Av. Empresário, 6000-767 Castelo Branco, Portugal
2. IDMEC/IST, Technical University of Lisbon (TU Lisbon), Av. Rovisco Pais, 1049-001 Lisboa, Portugal
Email: {paulo.goncalves,pedrotorres}@ipcb.pt, {j.sousa,jcpinto}@dem.ist.utl.pt

Abstract. In this paper, evolving Takagi-Sugeno fuzzy models are applied to model a dynamic system, namely a six-degrees-of-freedom robotic manipulator. In previous work the model was obtained off-line, but when the robot performs tasks away from the modeled workspace it fails to achieve its goal. The models must therefore evolve when the robot moves into a previously unmodeled workspace. Here, the models are obtained on-line for uncalibrated visual servoing in a 3D workspace. The approach recursively updates the inverse fuzzy models based only on measurements taken at a given time instant. The uncalibrated approach does not require the calibrated kinematic and camera models that classical visual servoing needs to obtain the Jacobian. Experimental results obtained on a PUMA robot performing eye-to-hand visual servoing in a 3D workspace demonstrate the validity of the proposed approach, in comparison with the previously developed off-line learning.

1. Introduction

Industrial robotic manipulators use sensor-based control to perform tasks, in both structured and unstructured environments. Vision sensors provide a vast amount of information about the environment in which robots move; vision is therefore essential for robots working in unstructured environments. This type of sensor considerably enlarges the potential applications of current robotic manipulators. Visual servoing can be defined as a method to control dynamic systems using the information provided by visual sensors.
In this paper, the control of a robot manipulator end-effector using a single camera looking at the robot (eye-to-hand) is addressed [1,2]. Early approaches to visual servoing are based on a known model, i.e. the robot-camera model, from which the relation between the features and the robot kinematics is obtained analytically [3]. Apart from these approaches, where the robot-camera model is known a priori, the model can also be estimated [4,3]. Such systems, known as Uncalibrated Visual Servoing, can deal with unknown robot kinematics and/or unknown camera parameters. By using this type of robot-camera model, the system becomes independent of the robot type, camera type, and even camera location. In this paper, the estimation of the robot-camera model from motions in the 3D workspace by learning is addressed both on-line and off-line, using fuzzy techniques to obtain a controller capable of controlling the system. An inverse fuzzy model is used to derive the inverse robot-camera model, i.e. the Jacobian, in order to compute the joint and end-effector velocities in a straightforward manner. The inverse fuzzy model can be applied directly as a controller, which is a simple way to implement a controller in real time; this feature is very important in robotic systems. Among the modeling techniques based on soft computing, fuzzy modeling is one of the most appealing. As the robotic manipulator together with the visual system is a nonlinear system that is only partly known, it is advantageous to use fuzzy modeling to derive a model (or, as in this case, an inverse model) based only on measurements. In this paper, two approaches are applied and compared: off-line [3] and on-line [5,6]. The first approach has already proven to perform well in visual servoing systems. Its main drawback is that the learning must be performed off-line. Since the environment in which the robot operates can vary considerably, new models should be derived very often, which takes time and also requires the robot to stop the task it is performing. That is why on-line modeling of the visual servoing system is preferable. This on-line approach has not yet been applied to control robotic manipulators using vision, and doing so is the objective of the present paper. The paper is organized as follows.
Section 2 describes briefly the concept of visual servoing and presents the uncalibrated visual servoing approach. Section 3 presents the evolving (on-line) fuzzy modeling, and discusses the problem of identifying inverse fuzzy models directly from data measurements. Section 4 presents the obtained results, where the identified inverse fuzzy models are discussed. Finally, Section 5 presents the conclusions and the future work. 2. Visual Servoing Machine Vision and Robotics can be used together to control a robotic manipulator. This type of control, which is defined as Visual Servoing, uses visual information from the work environment to control a robotic manipulator performing a given task. In the following sub-sections classical and uncalibrated visual servoing are presented, along with their main goals.

2.1. Classical Visual Servoing

Within the visual servoing framework [1,2], there are three main categories, related to the information obtained from the image features: image-based visual servoing, position-based visual servoing, and hybrid visual servoing. In the following, the classical approaches to modeling and control of position-based visual servoing are presented.

2.1.1. Modeling the Position-Based Visual Servoing System

In this paper, position-based visual servoing with 3D features [7] is used in an eye-to-hand system [1,2], where the camera is fixed and looking at the robot and the object. The 3D image features s are 3D points of the object in the camera frame, p. The kinematic model of the transformation between the image feature velocities and the joint velocities is defined as follows [7]:

ṡ = [ −I₃  S(p) ] ᶜWₑ ᵉJ_R q̇   (2.1)

where I₃ is the 3x3 identity matrix, S(p) is the skew-symmetric matrix of the 3D point p, ᶜWₑ is the transformation between the camera and end-effector frame velocities, and ᵉJ_R is the robot Jacobian. The 3D point is obtained from the captured image using a pose estimation algorithm [7].

2.2. Uncalibrated Visual Servoing

To derive an accurate Jacobian J, the camera model, the chosen image features, the position of the camera with respect to the world, and the depth of the target with respect to the camera frame must all be accurately determined. Even when a perfect model of the Jacobian is available, it can contain singularities, which hampers the application of a control law. Recall that the Jacobian must be inverted to send the camera velocity to the robot inner control loop; when the Jacobian is singular, the control cannot be performed correctly. There are visual servoing systems that obviate the calibration step and estimate the robot-camera model either on-line or off-line.
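The kinematic map (2.1) can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation; it assumes the twist transformation ᶜWₑ and the robot Jacobian ᵉJ_R are available as NumPy arrays, and uses the [−I₃ S(p)] sign convention of the text:

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix S(p), so that skew(p) @ w == np.cross(p, w)."""
    x, y, z = p
    return np.array([[0.0, -z,  y],
                     [ z, 0.0, -x],
                     [-y,  x, 0.0]])

def feature_velocity(p_cam, cWe, eJR, qdot):
    """Map joint velocities to 3D-point feature velocities, as in (2.1).

    p_cam : 3D point of the object expressed in the camera frame
    cWe   : 6x6 twist transformation between camera and end-effector frames
    eJR   : 6xn robot Jacobian in the end-effector frame
    qdot  : joint velocities (length n)
    """
    L = np.hstack([-np.eye(3), skew(p_cam)])  # 3x6 interaction matrix [-I3  S(p)]
    return L @ cWe @ eJR @ qdot
```

Classical visual servoing would invert this chain to obtain joint velocities from desired feature velocities, which is exactly the step the uncalibrated approach below avoids.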
The robot-camera model may be estimated analytically, using nonlinear least-squares optimization [8], or by learning/training, using fuzzy membership functions and neural networks [9,3].

In addition, the control system may estimate an image Jacobian and use the known robot model, or a coupled robot-camera Jacobian may be estimated. To overcome the difficulties regarding the Jacobian, a new type of differential relationship between the features and the joint velocities was proposed in [9]. This approach relates the image feature variation to the joint variation and the previous position of the robot manipulator:

δs(k+1) = F_k( δq(k), q(k) )   (2.2)

In image-based visual servoing, the goal is to obtain a joint velocity δq(k) capable of driving the robot toward a desired image feature position s(k+1), with a desired image feature error δs(k+1), from any position in joint space. This goal can be accomplished by modeling the inverse function (F_k)⁻¹ using inverse fuzzy modeling, as proposed in this paper and presented in Section 3. This approach to image-based visual servoing overcomes the problems stated previously regarding the Jacobian and the calibration of the robot-camera model.

3. Evolving Fuzzy Modeling

The model obtained with previous techniques [3,4], using Takagi-Sugeno (TS) fuzzy models, is assumed to be fixed, since it is learned off-line. Recently, attention has focused on on-line learning [5], where in a first phase the input-output data are partitioned using unsupervised clustering and in a second phase parameter identification is performed using a supervised learning method. In on-line fuzzy modeling, following [5], rule-based models of the TS type are also considered, typically in the affine form, with the input-output data acquired continuously. New data arriving at some time instant can bring new information about the system, which could indicate a change in its dynamics. This information may change an existing rule, by changing the spread of its membership functions, or even introduce a new one.
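An affine TS model of the kind considered here combines Gaussian rule antecedents with local affine consequents. The following is a hypothetical minimal inference routine, a sketch only; the function and parameter names are illustrative, and the Gaussian exponent 4/r² follows common evolving-TS practice rather than a formula stated in this paper:

```python
import numpy as np

def ts_predict(x, centers, r, thetas):
    """Evaluate an affine Takagi-Sugeno model (minimal sketch).

    x       : input vector, shape (n,)
    centers : rule antecedent centers, shape (R, n)
    r       : antecedent spread (the paper uses r in [0.3, 0.5])
    thetas  : affine consequent parameters, shape (R, n+1) as [bias, slopes]
    """
    # Gaussian firing degree of each rule, a function of the distance
    # between the input and the rule center
    d2 = np.sum((centers - x) ** 2, axis=1)
    mu = np.exp(-4.0 * d2 / r ** 2)
    lam = mu / np.sum(mu)               # normalized firing levels
    xe = np.concatenate(([1.0], x))     # extended input [1, x]
    y_rules = thetas @ xe               # local affine rule outputs
    return lam @ y_rules                # weighted average of rule outputs
```

In the evolving setting, `centers` and `thetas` are not fixed: the clustering step below adds or replaces rows of `centers`, and the consequents in `thetas` are re-estimated recursively.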
To achieve this, the algorithm must be able to judge the informative potential and the importance of the new data. The steps of the algorithm used for on-line fuzzy modeling, proposed in [5,6] (evolving fuzzy systems), are briefly presented below. The first step is based on the subtractive clustering algorithm, in which the input-output data are partitioned. The procedure must be initialized: the focal point of the first cluster is the first data point, and its potential is equal to one. Starting from the first data point, the potential of each new data point is calculated recursively using a Cauchy-type function of first order:

P_k(z_k) = 1 / ( 1 + (1/(k−1)) Σᵢ₌₁ᵏ⁻¹ Σⱼ₌₁ⁿ⁺¹ (d_ik^j)² )   (2.3)

where P_k(z_k) denotes the potential of the data point z_k calculated at time k, and d_ik^j = z_i^j − z_k^j denotes the projection on the axis z^j of the distance between the two data points z_i^j and z_k^j. When a new data point arrives, it also influences the potentials of the already defined cluster centers z_i*, i = 1, 2, ..., k. A recursive formula for the update of the cluster center potentials is defined in [5]:

(2.4)

where P_k(z_i*) is the potential at time k of the cluster center related to rule i. The next step of the algorithm is to compare the potential of the current data point with the potentials of the existing cluster centers. According to [5,6], a crucial part of evolving fuzzy systems is rule creation and modification. If the potential of a new data point is higher than the potentials of all existing cluster centers, the new data point is accepted as a new cluster center and a new rule is formed. If, in addition to the previous condition, the new data point is close to an old cluster center, the old cluster center is replaced. This last condition is defined by:

(2.5)

where r ∈ [0.3, 0.5] is the spread of the antecedent [5], and δ_min is the shortest distance between the new candidate z_k and all the existing cluster centers z_i*. The fuzzy sets in the antecedents of the rules are Gaussian, of the form:

μ_i(x) = exp( −4 ‖x − x_i*‖² / r² )   (2.6)

The consequents of the fuzzy rules are obtained using the global parameter estimation procedure based on weighted recursive least squares, presented in [5].
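The clustering step above can be sketched as follows. This is an illustrative sketch, not the recursive formulation of [5]: the potential is computed batch-style from stored history rather than recursively, and the closeness threshold r/2 used for rule replacement is an assumption, since the exact condition (2.5) is given in [5]:

```python
import numpy as np

def potential(k, z_new, history):
    """Cauchy-type potential of a new data point z_k, as in (2.3).

    Batch-style sketch: sums squared distances to all previous points
    instead of using the recursive form of [5].
    """
    d2 = sum(np.sum((z_new - z_i) ** 2) for z_i in history)
    return 1.0 / (1.0 + d2 / (k - 1))

def maybe_update_rules(z_new, P_new, centers, potentials, r):
    """Rule creation / replacement step of the evolving algorithm (sketch).

    If the new point's potential exceeds every existing center's potential,
    it becomes a new center; if it is additionally close to an old center
    (within r/2 here -- an assumed threshold), it replaces that center.
    """
    dists = [np.linalg.norm(z_new - c) for c in centers]
    if P_new > max(potentials):
        i_min = int(np.argmin(dists))
        if dists[i_min] < r / 2.0:       # close to an old center: replace it
            centers[i_min] = z_new
            potentials[i_min] = P_new
        else:                            # otherwise: form a new rule
            centers.append(z_new)
            potentials.append(P_new)
    return centers, potentials
```

Each accepted or replaced center then defines the Gaussian antecedent of one TS rule, whose affine consequent is fitted by weighted recursive least squares.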

3.1. Inverse modeling

For the robotic application in this paper, the inverse model is identified from measured input-output data, including the feature variations δs(k+1), following the procedure described in [3]. The approach presented in [4] was used to obtain the training set. Note that we are interested in the identification of an inverse model, as in (2.2).

4. Experimental Results

4.1. Mackey-Glass time series

The approach proposed in [5,6] was first tested on the Mackey-Glass time series, given by the following equation:

dx/dt = 0.2 x(t−τ) / ( 1 + x¹⁰(t−τ) ) − 0.1 x(t)   (4.1)

The system was defined with four inputs and one output: inputs x(t−18), x(t−12), x(t−6), x(t), and output x(t+85), the predicted value. The training set comprised 500 points, from t = 5001 to t = 5500. A fuzzy model with 67 rules was obtained with Ω = 500. Fig. 4.1 depicts the rule locations in the time series and the potentials of the clusters, relative to the system output. After an initial phase in which a large number of rules were formed, the rate of rule creation diminishes. The model was then tested; the results are presented in Table 4.1 and depicted in Fig. 4.2.

Table 4.1. VAF and MSE for the Mackey-Glass time series.
Number of rules: 67    VAF: 98.97%    MSE: 6.2212e-4
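The benchmark data above can be reproduced along these lines. This sketch uses forward Euler with unit step (a crude discretization) and assumes the delay τ = 17, the value commonly used for this benchmark but not stated explicitly in the text:

```python
import numpy as np

def mackey_glass(n_steps, tau=17, dt=1.0, x0=1.2):
    """Integrate the Mackey-Glass delay equation (4.1) with forward Euler.

    dx/dt = 0.2 x(t - tau) / (1 + x(t - tau)**10) - 0.1 x(t)
    """
    x = np.full(n_steps + tau, x0)        # constant initial history
    for t in range(tau, n_steps + tau - 1):
        x_d = x[t - tau]
        x[t + 1] = x[t] + dt * (0.2 * x_d / (1.0 + x_d ** 10) - 0.1 * x[t])
    return x[tau:]

def make_dataset(x, horizon=85):
    """Build the paper's pairs: inputs x(t-18), x(t-12), x(t-6), x(t); output x(t+85)."""
    lags = [18, 12, 6, 0]
    t0, t1 = max(lags), len(x) - horizon
    X = np.array([[x[t - l] for l in lags] for t in range(t0, t1)])
    y = x[t0 + horizon : t1 + horizon]
    return X, y
```

Feeding these pairs to the evolving modeling procedure, one sample at a time, reproduces the on-line learning scenario of this section.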

Fig. 4.1: System output versus cluster potential, and new rule locations.

Fig. 4.2: System output (blue) versus fuzzy model output (red).
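The fit measures reported in Tables 4.1 and 4.2, VAF and MSE, can be computed as follows. A minimal sketch; the VAF formula is the standard variance-accounted-for definition, an assumption since the paper does not spell it out:

```python
import numpy as np

def vaf(y, y_hat):
    """Variance Accounted For, in percent: 100 * (1 - var(y - y_hat) / var(y))."""
    return 100.0 * (1.0 - np.var(y - y_hat) / np.var(y))

def mse(y, y_hat):
    """Mean squared error between measured and model outputs."""
    return np.mean((y - y_hat) ** 2)
```

Note that VAF ignores constant offsets (a constant residual has zero variance), which is why MSE is reported alongside it.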

4.2. Robotic Manipulator

A PUMA 560 robotic manipulator and a Vector CCi4 camera (see Fig. 4.3), in an eye-to-hand configuration, were used to demonstrate the validity of the approach proposed in this paper. The experimental setup is shown in Fig. 4.3 [3,4]; not shown are the robot controller for joint control, the PC for image processing, the PC that runs the visual control algorithms, and a third PC for algorithm design. The system integration is presented in [11]. The robot moves in 3D space, but only joints 1, 2 and 3 move, i.e. the three wrist joints are kept fixed. The visual control algorithms were implemented in real time using MATLAB/Simulink and the xPC Target toolbox. The robot inner-loop velocity control runs at 1 kHz and the visual control loop at 12.5 Hz. A planar object, described by eight points, is rigidly attached to the robot end-effector; the centroid coordinates of the points are used as the two image features needed to obtain the model in the plane. The rotation angle of joint 5 is also used as an image feature, and can easily be obtained from the object since it is rigidly attached to the robot end-effector [3].

Fig. 4.3: The robot manipulator and the camera.

Following the work in [4], to obtain the identification data the robot swept the 3D workspace within the camera field of view along a 3D spiral path, starting at the spiral center, which allows the model for the end-effector position to be obtained. The inverse fuzzy model was identified from the 3D spiral trajectory data, i.e. the joint velocities (4.2) (see Fig. 4.5) and the sixteen image features (4.3) (see Fig. 4.4), using (2.2):

δq(k) = [ δq₁(k)  δq₂(k)  δq₃(k) ]ᵀ   (4.2)

s(k+1) = [ x₁(k+1)  y₁(k+1)  ...  x₈(k+1)  y₈(k+1) ]ᵀ   (4.3)

Fig. 4.4: Input data for the fuzzy model (x and y coordinates of the image features, shown in different colours).

Fig. 4.5: System output (blue) versus fuzzy model output (magenta).

Three fuzzy models with 50 rules each were obtained with spread r = 0.4. Fig. 4.5 and Table 4.2 depict the results of applying the evolving approach to visual servoing. The results are very similar to those obtained with the off-line approach: the VAF decreases by only 0.1% [4]. The major drawback of the evolving approach is the large number of rules; in the off-line approach, only four rules were needed to obtain the same VAF.

Table 4.2. VAF and MSE for the robot manipulator.
         Robot Joint 1    Robot Joint 2    Robot Joint 3
Rules    50               50               50
VAF      95.6%            98.4%            98.5%

5. Conclusions and Future Work

This paper introduces a novel contribution to eye-to-hand visual servoing, based on on-line fuzzy modeling, to obtain an uncalibrated visual servoing system. The presented method has been validated by the results, in a 3D workspace, shown in the paper. With the robot-camera model able to adapt on-line to changes in the environment, the robot can operate under different conditions without re-learning the model off-line or performing any kind of calibration. The evolving (on-line) results are not as precise as those of the off-line approach [4], because the latter takes all the data points into account, whereas on-line modeling uses only the current data point and the defined cluster centers to update the model, by introducing or replacing rules. That is why the on-line approach does not perform as well as the off-line one. However, we are convinced that these differences between the models will not compromise the use of the on-line model to control the robot. As future work, the proposed evolving inverse fuzzy model approach to visual servoing will be extended to the three-dimensional robot workspace and will be tested in controlling the PUMA robotic manipulator, in order to verify the results already achieved with the inverse model obtained off-line.
The trade-off between accuracy (VAF/MSE) and computational effort (number of rules) when controlling the robot must also be tackled.

Acknowledgments

This work was funded by FCT through "Programa POCI2010, Unidade46", subsidized by FEDER.

References

[1] F. Chaumette, S. Hutchinson (2006). Visual servo control, Part I: Basic approaches. IEEE Robotics and Automation Magazine, 13(4):82-90.
[2] F. Chaumette, S. Hutchinson (2007). Visual servo control, Part II: Advanced approaches. IEEE Robotics and Automation Magazine, 14(1):109-118.
[3] P.J.S. Gonçalves, L.F. Mendonça, J.M.C. Sousa, J.R. Caldas Pinto (2008). "Uncalibrated Eye-to-Hand Visual Servoing using Inverse Fuzzy Models". IEEE Transactions on Fuzzy Systems, 16(2):341-353.
[4] P.J.S. Gonçalves, A. Paris, C. Christo, J.M.C. Sousa, J.R. Caldas Pinto (2006). "Uncalibrated Visual Servoing in 3D Workspace". Image Analysis and Recognition, Lecture Notes in Computer Science (4142):225-236.
[5] P. Angelov, D. Filev (2004). An approach to online identification of Takagi-Sugeno fuzzy models. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34:484-498.
[6] P. Angelov, Xiaowei Zhou (2008). Evolving Fuzzy-Rule-Based Classifiers From Data Streams. IEEE Transactions on Fuzzy Systems, 16(6):1462-1475.
[7] E. Cervera, P. Martinet (1999). Combining pixel and depth information in image-based visual servoing. In Proceedings of the Ninth International Conference on Advanced Robotics, pages 445-450, Tokyo, Japan.
[8] J. Piepmeier, G. McMurray, H. Lipkin (2004). Uncalibrated dynamic visual servoing. IEEE Transactions on Robotics and Automation, 20(1):143-147.
[9] I.H. Suh, T.W. Kim (1994). Fuzzy membership function based neural networks with applications to the visual servoing of robot manipulators. IEEE Transactions on Fuzzy Systems, 2(3):203-220.
[10] S.L. Chiu (1994). Fuzzy model identification based on cluster estimation. Journal of Intelligent and Fuzzy Systems, 2:267-278.
[11] P.M.R. Morgado, J.R. Caldas Pinto, J.M.M. Martins, P.J.S. Gonçalves (2009). Cooperative Eye-in-Hand/Stereo Eye-to-Hand Visual Servoing. Proc. of RecPad 2009 - 15ª Conferência Portuguesa de Reconhecimento de Padrões, Aveiro, Portugal.