Designing Radial Basis Neural Networks using a Distributed Architecture


J.M. Valls, A. Leal, I.M. Galván and J.M. Molina
Computer Science Department, Carlos III University of Madrid
Avenida de la Universidad, 30. 28911 Leganés, SPAIN

Abstract: - Radial Basis Neural (RBN) networks have the universal approximation property, and their convergence is very fast compared to multilayer feedforward neural networks. However, determining the architecture of an RBN network for a given problem is not straightforward. In addition, the number of hidden units allocated in an RBN network seems to be a critical factor in the performance of these networks. In this work, the design of RBN networks is based on the cooperation of n+m agents: n RBN agents and m manager agents, organised in a Multiagent System. The training process is distributed among the n RBN agents, each one with a different number of neurons. Each agent executes a number of training cycles, a stage, when a manager sends it a message. The m manager agents are in charge of controlling the evolution of the problems, one problem per manager agent. The whole process is governed by the manager agents, each of which decides the best RBN agent in each stage for its problem.

Keywords: - Multiagent Systems, Distributed Systems, Radial Basis Neural Networks.

1 Introduction

RBN networks [1,2] originated from the use of radial basis functions, such as Gaussian functions, in the solution of the real multivariate interpolation problem. RBN networks are composed of a single hidden layer with local activation functions distributed in different neighbourhoods of the input space. The input layer is simply a set of sensory units, which transmits the external input without weighted connections. The second layer is the hidden layer, which applies a non-linear transformation to the input space. The third and final layer linearly combines the outputs of the hidden units.
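The three-layer structure just described (local Gaussian hidden units followed by a linear output layer) can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the choice of one width per Gaussian unit is an assumption, since the paper does not detail the kernel parameterisation.

```python
import numpy as np

def rbn_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of a radial basis network with one linear output.

    x       : (n_samples, n_inputs) external input patterns
    centers : (n_hidden, n_inputs)  centres of the Gaussian units
    widths  : (n_hidden,)           width of each Gaussian unit
    weights : (n_hidden,)           linear output weights
    """
    # Hidden layer: local Gaussian activations, each unit covering
    # a different neighbourhood of the input space.
    sq_dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-sq_dist / (2.0 * widths ** 2))   # (n_samples, n_hidden)
    # Output layer: linear combination of the hidden activations.
    return phi @ weights + bias
```

Only the output weights (and, in some training schemes, the centres and widths) are adapted during learning, which is what makes the local approximation fast to train.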
The significance of this is that RBN networks construct local input-output approximations, which yields very fast convergence of the learning algorithm [1,3]. Despite these advantages, RBN networks are not as widely used as they should be. One of the main reasons seems to be that it is not straightforward to design an appropriate RBN network for a given problem. The design of a successful RBN network involves determining the topology, that is, the number of radial basis units or hidden units, which is a critical factor in the performance of these networks. The design of RBN networks is generally carried out through a tedious trial-and-error process: different RBN architectures, varying the number of hidden units, are completely trained, and the topology that achieves the best error performance is chosen as the definitive topology for the problem. That trial-and-error process implies a large computational cost, in terms of learning cycles, because the learning process must be completed for each RBN architecture. The goal of this paper is to avoid a trial-and-error process when obtaining an adequate RBN network. With this purpose, an automatic process is proposed. In recent years, the interest of the scientific community in methods or algorithms to automatically design appropriate topologies of artificial neural networks has increased. The main attention in this field has been paid to multilayer feedforward neural networks, for instance [4,5,6]. Only a few works concerning the determination of RBN architectures appear in the literature. The main research line in this field focuses on using evolutionary techniques, such as genetic algorithms, to determine the optimal number of hidden units in an RBN network [7, 8, 9]. It has been shown that those strategies can be successful in some applications. However, they generally imply a large computational cost because a large set of RBN networks must be trained.
In this work, the proposed automatic system to determine an appropriate RBN network for a given

problem is based on the cooperation of different agents organized in a Multiagent System. The proposed system is composed of RBN agents and manager agents organized in a Multiagent architecture [10, 11, 12]. The RBN agents are RBN networks with different topologies, that is, with different numbers of hidden neurons. Each manager agent receives information about the evolution of the training process in each RBN agent for a problem, and decides which topology is the most suitable to solve its problem, in terms of reaching the minimum error in the fewest learning cycles. In this way, only one RBN agent (defining a particular topology) is training at a time for a given problem, and the training process is distributed among the different agents. The number of learning cycles executed by the proposed system until the most adequate RBN network is found is measured. In order to show the effectiveness of the Multiagent System, the number of learning cycles that a trial-and-error process would involve is also measured; in this case, the trial-and-error mechanism is organized as a sequential procedure without the manager agent. The Multiagent System reduces the number of training cycles of all RBN networks compared with a sequential training process. In addition, the proposed Multiagent System is a general architecture that could be useful for executing several tasks. Although in this work it is only used for the training task of a given problem, it could also be used for different tasks: for instance, agents could execute different training tasks (one task per problem), or test tasks, so that while some networks are training, tests could be carried out by other networks.
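The stage-based scheduling described above, where the manager repeatedly grants the next training stage to the currently most promising topology, can be sketched as follows. The agent interface (`run_stage`, `error`) and the control flow are hypothetical illustrations; only the credit rule, C = 1/(nE) with C = -1 for stalled agents, comes from the paper (section 3.1.1).

```python
def credit(n_cycles, error, d_error, goal, eps=1e-12):
    """Credit rule of section 3.1.1: C = 1/(n*E); an agent whose error
    has stalled (near-zero derivative) above the goal gets C = -1."""
    if abs(d_error) < eps and error > goal:
        return -1.0
    return 1.0 / (n_cycles * error)

def manager_loop(agents, goal, max_cycles, stage):
    """One stage at a time, only the agent with the best credit trains.

    Each agent is assumed to expose run_stage(stage) -> credit value
    and an .error attribute with its current mean square error.
    """
    credits = {a: a.run_stage(stage) for a in agents}   # initial stage for all
    cycles = {a: stage for a in agents}
    while True:
        best = max(credits, key=credits.get)
        if credits[best] < 0:            # every agent discarded: failure
            return None
        if best.error <= goal:           # goal reached: best topology found
            return best
        if cycles[best] >= max_cycles:   # training budget exhausted
            credits[best] = -1.0
            continue
        credits[best] = best.run_stage(stage)
        cycles[best] += stage
```

Because only the leading agent trains in each iteration, the total number of learning cycles stays well below training every topology to completion, which is the saving reported in the experiments.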
3 Multiagent System Architecture

The Multiagent System proposed in this paper is designed to find the optimal topology of radial basis neural networks while solving several problems at the same time. The system is composed of two types of agents: RBN agents and Manager Agents (MA). RBN agents are able to execute an RBN network. Each RBN agent can train one topology (different for each agent), that is, an RBN with a pre-defined number of hidden neurons. There is no communication among RBN agents; they only communicate with manager agents. MA agents are able to solve a real problem using RBN agents. The initial definition of this problem and the function used to evaluate the learning process of each RBN are known by each MA. Each MA has knowledge about all RBN agents and uses that knowledge to decide, at the end of each stage, which RBN agent is the best, that is, the most appropriate to solve the problem. In what follows, the tasks of the agents involved in the Multiagent System and their architecture are described.

The responsibilities of a manager agent are the following:
- Sending the initial information of the problem: number of cycles, credit evaluation function, etc.
- Collecting data from the RBN agents.
- Monitoring the evolution of the problem, in order to ensure that the system goals are achieved.
- Choosing the RBN agent with the best performance, that is, with the best associated credit value, as explained in section 3.1.1.

On the other hand, an RBN agent has to carry out the following tasks for each problem:
- Establishing the communication with the manager agent (receiving the initial information).
- Executing a fixed number of learning cycles when indicated by the manager agent; this number of cycles is called a stage (this information is received initially).
- Evaluating the credit value associated with its radial basis neural network when the number of learning cycles is reached, in order to decide the optimal architecture (see section 3.1.1). The credit evaluation function used depends on the problem and is received as initial information.
- Checking whether the goal has been reached.
- Checking when the training process must be stopped.

3.1 RBN Agents Architecture

As shown in figure 1, RBN agents have three different modules:

a) Communications Module. This module performs the communications with manager agents using a simple protocol following KQML, which is described in section 3.3. When several MAs send messages requesting execution of the RBN for their problems, this module decides the priority of the requests as a function of the evolution of the results.

b) Control Module. The control module stores information about the training state of the RBN network for each problem. That information is useful to measure the goodness of the neural network for the given problem:
- The number of learning cycles accomplished up to the current instant, in order to measure the

computational effort involved.
- The mean square error reached up to the current instant, to measure the goodness of the RBN network.
- The derivative of the mean square error with respect to the number of learning cycles, to measure the speed of convergence.
- The mean square error desired to solve the problem.
- The credit value associated with the RBN network.

On the other hand, the control module performs the following operations: starting a training stage; stopping the training if the goal is achieved; and computing the credit value, as shown in section 3.1.1, using the credit evaluation function sent by the corresponding MA.

c) Low Level Module (NN Module). The low level module is the radial basis neural network in the strict sense. It receives the training input patterns and produces the network output.

Figure 1: Agents Architecture (RBN agent: Communication, Control and Low Level modules; Manager agent: Communication and Control modules).

3.1.1 Credit Evaluation Function

One of the most important aspects of the Multiagent System proposed in this work is the way the credit value is assigned to each RBN agent. The manager agent uses that value to decide which RBN agent must start a new training stage. Hence, the result of the Multiagent System depends strongly on the credit function chosen. The simplest function that the MA could send uses the number of learning cycles (n) and the mean square error (E) to evaluate the credit value, because they are the main parameters reflecting the goodness of an RBN network. In this work, the credit function selected is the following:

C = 1 / (n E)

Several considerations have been taken into account, which are briefly described next: the smaller the error is, the better the RBN network behaves, so C should be inversely proportional to E; and the sooner the agent achieves a small error, the better.
So, C should also be inversely proportional to the number of training cycles n. In any case, if the derivative of the mean square error is very small while the error is still bigger than the goal, the value assigned to C is -1, which means that the agent is discarded.

3.2 Manager Agent Architecture

As shown in figure 1, the manager agent has two different modules:

a) Communication Module. This module performs the communication with RBN agents. If all RBN agents are able to solve the problem, the protocol followed is the one explained in section 3.3. When an RBN agent rejects an ORDER, the communication module selects the next RBN agent to perform the learning stage.

b) Control Module. It stores the information about the credit value of each RBN agent and decides which of the RBN agents is the best.

3.3 Communication Protocol

A manager agent (M.A.) initiates the process. Its communication module generates a REQUEST.ORDER(Init, GP) act, and a message is sent to each RBN agent (R.A.). GP is a data structure containing the general parameters of the system: Epsilon (goal error), MaxCycles (maximum number of training cycles), N (number of training cycles in each training stage) and the credit evaluation function. Each R.A. acknowledges this message by generating an ACCEPT(Nj) act which, when processed, sends an answer message to the manager agent, and the agent starts the initial training stage. If an R.A. is not ready to perform this task, it generates a REJECT(Nj) act which, when processed, sends a message to the M.A. The M.A. waits until it has all the answers from the R.A.s. From then on, the communication is established between the M.A. and the R.A.s that accepted the first message. When the R.A.s finish their initial training stage, their control modules compute the credit values and send them to the communication module, which generates an INFORM(Ni, C) act that produces a message to the M.A. The M.A. control module writes the value of C

on a table indexed by the RBN agent number (the Credit Table).

Figure 2: Communication Protocol (message exchange between the Manager Agent and the RBN agents via REQUEST.ORDER, ACCEPT, REJECT and INFORM acts).

Once the M.A. has received the messages from all the R.A.s, it selects the best value from the credit table, takes its index j, and generates a REQUEST.ORDER(Train, Nj) act, which produces a message sent to the j-th R.A. This agent acknowledges it with an ACCEPT act and starts a new training stage. If this agent is working on another problem, it sends a REJECT act and the M.A. selects the next best agent. When the M.A. receives an ACCEPT message, it means that the RBN agent is running a learning stage and the result will be received later. Meanwhile, the M.A. stays in a wait state until an INFORM(Nj, C) message from the j-th agent is received. When this happens, the M.A. writes the new value of C on the table and, again, selects the best value from the credit table, takes its index j, and generates a REQUEST.ORDER(Train, Nj) act. The negotiation stops when:

1. All the credit values are -1, so none of the R.A.s is suitable. In this case, the M.A. communication module generates a REQUEST.ORDER(Kill, Fail) act which, when processed, sends a message to every R.A. to stop the system.
The result is that none of the agents is able to perform the task.

2. An R.A. (the k-th one) reaches an error E_k <= E_goal. In this case, this agent stops its training stage and generates an INFORM(Success, Status, Nk) act, which produces the corresponding message to the M.A.; the M.A. then generates a REQUEST.ORDER(Success, Nk) act that implies a message sent to all the R.A.s to stop the system, informing them that the goal has been achieved by the k-th agent.

4 Experimental Results

The Multiagent System proposed in this work has been applied to two different approximation domains: the Hermite polynomial and a piecewise-defined function. The aim in each experimental case is to find the most appropriate RBN topology, measured as the ability to reach the minimum error in the fewest learning cycles. Several simulations varying the minimum mean square error that must be reached have been carried out. With this purpose, a set of RBN agents with different numbers of hidden neurons is established for each domain. In addition, for each domain the number of learning cycles involved in the Multiagent System has been compared with the number of learning cycles required when all RBN networks are simultaneously trained.

4.1 Hermite Polynomial

The Hermite polynomial is given by the following equation:

f(x) = 1.1 (1 - x + 2x^2) exp(-x^2 / 2)

A random sampling of the interval [-4, 4] is used to obtain 100 input-output points for the training set. Data are normalized to the interval [0, 1]. In this case, 10 RBN agents, with 5, 10, 15, 20, 25, 30, 35, 40, 45 and 50 hidden neurons, have been used by the Multiagent System. The most appropriate RBN networks obtained by the Multiagent System for three representative values of the minimum error are shown in Table 1. Figure 3 displays the evolution of the mean square error for the 10 RBN architectures.
Table 1: RBN networks obtained by the MA System for the Hermite polynomial

Minimum error | Optimal RBN network by MA System
0.001         | 20 hidden neurons
0.0007        | 20 hidden neurons
0.0002        | 35 hidden neurons
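The training set of section 4.1 can be generated as below. The seed is arbitrary, and min-max scaling is an assumption, since the paper only states that data are normalized to [0, 1].

```python
import numpy as np

def hermite(x):
    # Target function of section 4.1: f(x) = 1.1 (1 - x + 2x^2) exp(-x^2/2)
    return 1.1 * (1.0 - x + 2.0 * x ** 2) * np.exp(-x ** 2 / 2.0)

rng = np.random.default_rng(42)                 # seed chosen arbitrarily
x_train = rng.uniform(-4.0, 4.0, size=100)      # random sampling of [-4, 4]
y_train = hermite(x_train)

# Min-max scaling to [0, 1]; the paper does not state the method used.
def minmax(v):
    return (v - v.min()) / (v.max() - v.min())

x_norm, y_norm = minmax(x_train), minmax(y_train)
```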

Figure 3 shows that the best RBN topology is composed of either 20 or 35 hidden neurons, and the Multiagent System has indeed selected those architectures as the best. In addition, the Multiagent System requires fewer learning cycles to find them than a simultaneous training process, as shown in Table 2. The advantage of the Multiagent System in terms of learning cycles becomes more significant as the required minimum error is reduced.

Fig. 3: Evolution of the mean square error for different RBN network topologies: Hermite polynomial.

Table 2: Multiagent System vs. simultaneous training for the Hermite polynomial (number of cycles)

Minimum error | Simultaneous training | Multiagent System
0.001000      | 58  | 47
0.000950      | 58  | 47
0.000900      | 58  | 47
0.000850      | 58  | 47
0.000800      | 58  | 47
0.000750      | 67  | 48
0.000700      | 67  | 48
0.000650      | 67  | 48
0.000600      | 67  | 48
0.000550      | 76  | 49
0.000500      | 76  | 49
0.000450      | 76  | 49
0.000400      | 85  | 50
0.000350      | 94  | 51
0.000300      | 103 | 52
0.000250      | 157 | 58
0.000200      | 276 | 198

4.2 A Piecewise-Defined Function

The Multiagent strategy has also been applied to the approximation of a single-variable piecewise-defined function, given by the following equation:

f(x) = -2.186x - 12.864                            if -10 <= x < -2
f(x) = 4.246x                                      if  -2 <= x <  0
f(x) = 10 exp(-0.05x - 0.5) sin[(0.03x + 0.7)x]    if   0 <= x <= 10

The training set is composed of 120 points randomly generated from a uniform distribution in the interval [0, 1]. The number of hidden neurons of the candidate RBN networks had to be increased in order to reach an acceptable minimum error. Thus, 15 RBN agents have been incorporated into the Multiagent System, with topologies fixed to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70 and 75 hidden neurons, respectively.
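An executable sketch of the piecewise-defined target function of section 4.2 follows; the coefficients are as read from the equation, and the function turns out to be continuous at the breakpoints x = -2 and x = 0, which supports that reading.

```python
import numpy as np

def piecewise(x):
    """Single-variable piecewise-defined target function of section 4.2."""
    x = np.asarray(x, dtype=float)
    return np.where(
        x < -2, -2.186 * x - 12.864,                  # -10 <= x < -2
        np.where(
            x < 0, 4.246 * x,                         #  -2 <= x <  0
            10.0 * np.exp(-0.05 * x - 0.5)
                 * np.sin((0.03 * x + 0.7) * x),      #   0 <= x <= 10
        ),
    )
```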
The optimal topologies obtained by the Multiagent System for the different minimum errors are shown in Table 3.

Table 3: RBN networks obtained by the MA System for the piecewise-defined function

Minimum error | Optimal RBN network by MA System
0.001         | 55 hidden neurons
0.0007        | 55 hidden neurons
0.00035       | 55 hidden neurons

As observed in figure 4, the best architecture is composed of 55 hidden neurons for every minimum error, and the Multiagent System is able to find this architecture.

Fig. 4: Evolution of the mean square error for different RBN network topologies: piecewise-defined function.

In addition, the number of learning cycles required by the Multiagent System is smaller than with a simultaneous training process, as observed in Table 4, which presents the number of cycles needed by the Multiagent System and by simultaneous training for different minimum errors. For the piecewise-defined function, the difference in learning cycles between the two methods is more significant than in the previous experimental case, especially when the minimum error is more restrictive.

Table 4: Multiagent System vs. simultaneous training for the piecewise-defined function (number of cycles)

Minimum error | Simultaneous training | Multiagent System
0.001000      | 236 | 208
0.000950      | 236 | 208
0.000900      | 251 | 209
0.000850      | 251 | 209
0.000800      | 266 | 210
0.000750      | 266 | 210
0.000700      | 281 | 211
0.000650      | 296 | 212
0.000600      | 311 | 213
0.000550      | 326 | 214
0.000500      | 341 | 215
0.000450      | 371 | 217
0.000400      | 426 | 221
0.000350      | 503 | 228

5 Conclusion

The design of optimal RBN networks for a given problem is currently based on the experience of the developer. This work proposes the use of different RBN networks and the cooperation among them in a Multiagent System in order to automatically detect the best-suited RBN network. The proposed Multiagent System is composed of n+m agents: n RBN agents and m manager agents, where each manager agent controls the evolution of the solution of one problem. In this way, the proposed architecture makes it possible to work on several problems (m, one per manager agent) at the same time, optimizing the use of the RBN agents. The protocol used in this MAS follows KQML, adding the initial information that a manager agent sends to the RBN agents in order to define the problems exactly. This architecture and the communication protocol developed have been tested in different experiments. The results have demonstrated the validity of the proposed system compared with an exhaustive search for the optimal RBN network.

References

[1] Moody J.E.
and Darken C.J.: Fast Learning in Networks of Locally-Tuned Processing Units. Neural Computation, 1, 281-294, 1989.
[2] Poggio T. and Girosi F.: Networks for Approximation and Learning. Proceedings of the IEEE, 78, 1481-1497, 1990.
[3] Galván I.M., Zaldívar J.M., Hernández H. and Molga E.: The Use of Neural Networks for Fitting Complex Kinetic Data. Computers & Chemical Engineering, vol. 20, no. 12, 1451-1465, 1996.
[4] Harp S., Samad T. and Guha A.: Designing Application-Specific Neural Networks using the Genetic Algorithm. Advances in Neural Information Processing Systems, vol. 2, 447-454, 1990.
[5] Gruau F.: Automatic Definition of Modular Neural Networks. Adaptive Behavior, vol. 3, no. 2, 151-183, 1995.
[6] Fogel D.B., Fogel L.J. and Porto V.W.: Evolving Neural Networks. Biological Cybernetics, 63, 487-493, 1990.
[7] Gruau F.: Genetic Synthesis of Modular Neural Networks. Proceedings of the 5th International Conference on Genetic Algorithms, San Mateo, CA, 318-325, 1993.
[8] Whitehead B.A. and Choate T.D.: Evolving Space-Filling Curves to Distribute Radial Basis Functions Over an Input Space. IEEE Transactions on Neural Networks, 5, 1, 15-23, 1994.
[9] Whitehead B.A. and Choate T.D.: Cooperative-Competitive Genetic Evolution of Radial Basis Function Centers and Widths for Time Series Prediction. IEEE Transactions on Neural Networks, 7, 4, 869-880, 1996.
[10] Lux A. and Steiner D.: Understanding Cooperation: an Agent's Perspective. Proc. International Conference on Multiagent Systems (ICMAS-95), AAAI Press, San Francisco, CA, 1995.
[11] Molina J.M., Jiménez F.J. and Casar J.R.: Cooperative Management of Netted Surveillance Sensors. IEEE International Conference on Systems, Man and Cybernetics, 845-850, Orlando, USA, 1997.
[12] Wesson R. et al.: Network Structures for Distributed Situation Assessment. In: Readings in Distributed Artificial Intelligence, A.H. Bond and L. Gasser (eds.), Morgan Kaufmann, 1988.