Evolutionary Complex Neural Networks


Mauro Annunziato, Ilaria Bertini, Matteo De Felice, Stefano Pizzuti
ENEA Energy, New Technologies and Environment Agency
Casaccia R.C. - Via Anguillarese 301, 00123 Rome, Italy
Phone: +39-0630484411, Fax: +39-0630484811
email: {mauro.annunziato, ilaria.bertini, matteo.defelice, stefano.pizzuti}@casaccia.enea.it

ABSTRACT: Complex networks, such as the scale-free model, are observed in many biological and social systems, and the application of this topology to artificial neural networks (ANN) leads to interesting considerations. In this paper, we present a preliminary study on the modelling capabilities of ANNs with complex topologies. We used an evolutionary algorithm (EA) to train them, thus providing the paradigm of Evolutionary Complex Neural Networks (ECNN). We compared the performance of ECNN to some well known techniques, including simple feed-forward evolutionary and Back-Propagation trained neural networks, on several well established benchmarks, and experimentation shows promising results.

KEYWORDS: complex networks, evolutionary neural networks, artificial life

INTRODUCTION

Artificial Neural Networks (ANN) and Evolutionary Algorithms (EA) are both abstractions of natural processes. They are formulated into a computational model so that the learning power of neural networks and the adaptive capabilities of evolutionary processes can be harnessed in an artificial life environment. Adaptive learning, as it is called, produces results that demonstrate how complex and purposeful behaviour can be induced in a system by randomly varying the topology and the rules governing the system. Evolutionary algorithms can help determine optimised neural networks, giving rise to a new branch of ANN known as Evolutionary Neural Networks (ENN) [1]. It has been found [2] that, in most cases, combinations of evolutionary algorithms and neural nets perform equally well (in terms of accuracy) and were as accurate as hand-designed neural networks trained with back-propagation [3]. However, some combinations of EAs and ANNs performed much better on some data than the hand-designed networks or other EA/ANN combinations. This suggests that in applications where accuracy is at a premium, it might pay off to experiment with EA and ANN combinations.

A new and very interesting research area which has recently emerged is that of Complex Networks (CN). CN (mainly scale-free networks) are receiving great attention in the physics community, because they seem to be the basic structure of many natural and artificial networks, such as protein, metabolism and species networks and other biological networks [4][5][6], the Internet, the WWW, the e-mail network, trust networks and many more [7]. In this context, using complex ANN topologies driven by evolutionary mechanisms is a new idea, and we used them in order to model complex processes.

THE METHODOLOGY

In this context, the goal of the proposed work is the study of evolutionary neural networks with a directed-graph based topology, obtained using an iterative algorithm similar to that proposed by Barabasi and Albert in 1999 [8].

COMPLEX NETWORKS

A unique definition of complex network does not exist; the term refers to networks with non-trivial topology and a high number of nodes and connections. However, complex networks can be classified according to some topology descriptors, among which the most important ones are: the node degree distribution, the shortest path length and the clustering coefficient.
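As an illustration, the three descriptors just listed can be computed with standard graph tooling. The following is a minimal sketch using the networkx library (our choice for illustration; it is not something used in the paper):

    import networkx as nx

    # Build a Barabasi-Albert scale-free graph as a stand-in example
    G = nx.barabasi_albert_graph(n=4000, m=2, seed=42)

    # Node degree distribution: P(k) = fraction of nodes with degree k
    hist = nx.degree_histogram(G)          # hist[k] = number of nodes of degree k
    degree_dist = [h / G.number_of_nodes() for h in hist]

    # Average shortest path length (defined on a connected graph)
    avg_path = nx.average_shortest_path_length(G)

    # Clustering coefficient averaged over all nodes
    avg_clust = nx.average_clustering(G)

    print(f"L = {avg_path:.2f}, C = {avg_clust:.4f}")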

Properties of these networks are often compared with random graphs [9], which are to be considered simple networks. Random networks have a Poisson node degree distribution, a small shortest path length and a small clustering coefficient. Small World networks [4][10] have a Poisson node degree distribution, a small shortest path length and a high clustering coefficient. They lie between regular and random networks (see Figure 1), and it has been shown [4] that this topology is the optimal one for communication tasks.

Figure 1: network topologies

The scale-free model [11] has a node degree distribution which follows a power law. This means that there are few nodes with high connectivity (hubs) and many nodes with few links (see Figure 2). These networks show the so-called small-world property [4]: every two nodes of the network are placed at a distance of a relatively small number of edges. These types of networks are receiving great attention in the physics community, because many networks have recently been reported to follow a scale-free degree distribution. Just as examples we can cite the Internet, the WWW, the e-mail network, metabolic networks, trust networks and many more [7]. Their inspiring philosophy could be synthesized in the sentence "the rich get richer", because each node has a probability of getting a new link that is proportional to the number of its current links.

Figure 2: scale-free network

Therefore, in this study, we focused on scale-free networks.

THE ALGORITHM

We used neural networks based on a complex topology created with the algorithm whose pseudo-code is shown in Table I. The algorithm starts by creating an initial set of nodes connected to each other; then each added node is connected to a destination node selected with a non-linear preferential-attachment function, which defines the probability that a node in the network receives a link from a newly inserted node [8][12]. The analytic form of this function is:

    \Pi(k_i) = \frac{k_i^{\alpha}}{\sum_j k_j^{\alpha}}    (1)

where k_i is the degree of node i. This function is monotonically increasing; the α parameter influences the number of dominant hubs with high connectivity. In Figure 3 and Figure 4 we show the node degree distributions of networks with 4000 nodes built with the presented algorithm for different α values, fitted with a power function of the form $k^{-\gamma}$.

    BEGIN
        /* Initial set of nodes */
        FOR i = 1 TO m0
            ADD node_i
            CONNECT node_i TO ALL

        /* Add nodes and connect them with the PA function */
        FOR i = 1 TO TOTAL_NODES
            ADD node_i
            FOR j = 1 TO m
                x = GET_NODE_WITH_PREFERENTIAL_ATTACHMENT
                CONNECT node_i TO node_x
                CONNECT node_x TO node_i

        /* Select output node */
        x = RANDOM(TOTAL_NODES)
        OUTPUT_NODE = node_x

        /* Add and connect input nodes */
        CASE INPUT_CONNECTION_TYPE OF:
            /* Connection type A */
            A:  ADD ALL_INPUT_NODES
                FOR i = 1 TO m
                    x = RANDOM(TOTAL_NODES)
                    CONNECT ALL_INPUT_NODES TO node_x
            /* Connection type B */
            B:  FOR i = 1 TO TOTAL_INPUT_NODES
                    ADD input_node_i
                    FOR j = 1 TO m
                        x = GET_NODE_WITH_PREFERENTIAL_ATTACHMENT
                        CONNECT input_node_i TO node_x
        END CASE
    END

Table I: algorithm pseudo-code

Here the parameters m0 and m represent, respectively, the number of initial nodes and the number of links added each time a new node is inserted. In the following table we report the values of the parameters we used. The output node of the network is randomly chosen after the insertion of the nodes of the hidden layer. After the selection of the output node, the input nodes are inserted.

    Parameter   Value
    m0          4
    m           2-6
    α           1.2

Table II: algorithm parameters
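To make the construction concrete, here is a minimal Python sketch of the growth phase under our reading of the pseudo-code (function and parameter names are ours; connections are made bidirectional, as in the CONNECT pairs of Table I, and m defaults to 2 from the 2-6 range above):

    import random
    from collections import defaultdict

    def grow_topology(total_nodes, m0=4, m=2, alpha=1.2, seed=None):
        """Grow a hidden-layer graph by non-linear preferential attachment (Eq. 1)."""
        rng = random.Random(seed)
        adj = defaultdict(set)

        # Initial set of m0 nodes, all connected to each other
        for i in range(m0):
            for j in range(i):
                adj[i].add(j)
                adj[j].add(i)

        # Each new node links (bidirectionally) to m distinct existing nodes,
        # drawn with probability proportional to k_i ** alpha
        for new in range(m0, m0 + total_nodes):
            existing = list(adj)
            weights = [len(adj[v]) ** alpha for v in existing]
            targets = set()
            while len(targets) < m:
                targets.add(rng.choices(existing, weights=weights)[0])
            for t in targets:
                adj[new].add(t)
                adj[t].add(new)

        output_node = rng.choice(list(adj))   # output node chosen at random
        return adj, output_node

    adj, out = grow_topology(total_nodes=4000, m=2, alpha=1.2, seed=1)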

Figure 3: degree distribution of a 4000-node network created with α = 1, fitted with a power-law function with γ = 3.1 (dashed line)

Figure 4: degree distribution of a 4000-node network created with α = 1.2, fitted with a power-law function with γ = 4 (dashed line)

In this algorithm we considered two types of connection between the network and the input nodes (in the pseudo-code these methods are indicated by the variable INPUT_CONNECTION_TYPE). In case A all the input nodes are connected to the same m nodes of the hidden layer, whereas in case B each input node is linked to m random nodes of the hidden layer. A network created with this algorithm is presented in Figure 5.

Figure 5: an example of complex neural network. Input nodes are black and the output node is grey.
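The exponent γ reported in the captions of Figures 3 and 4 can be estimated, for instance, by a least-squares fit of the empirical degree distribution on log-log axes. The following rough sketch is our own (simpler and less robust than a maximum-likelihood fit):

    import numpy as np

    def fit_gamma(degrees):
        """Estimate gamma in P(k) ~ k**(-gamma) by a log-log linear fit."""
        ks, counts = np.unique(degrees, return_counts=True)
        pk = counts / counts.sum()             # empirical P(k)
        mask = ks > 0                          # log is undefined at k = 0
        slope, _intercept = np.polyfit(np.log(ks[mask]), np.log(pk[mask]), 1)
        return -slope                          # gamma is minus the slope

    degrees = [len(neigh) for neigh in adj.values()]   # from the sketch above
    print(f"estimated gamma = {fit_gamma(degrees):.2f}")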

THE EVOLUTIONARY ENVIRONMENT

The implemented evolutionary environment is an Artificial Life (ALIFE) environment [13]. This approach has been tested on the optimisation of static well known benchmarks, such as the Travelling Salesman Problem, Chua's circuit and the Kuramoto dynamical system [14], as well as on real cases [15][16][17][18].

The ALIFE context is a two-dimensional lattice (life space) representing a flat physical space where the artificial individuals can move around. In this experimentation we used a 25x25 lattice, corresponding to a maximum population size of 625 individuals, and an initial population of 215 individuals. At each iteration (or life cycle), individuals move in the life space and, in case of meeting with other individuals, interaction occurs. Each individual has a particular set of rules that determines its interactions with other agents, basically based on a competition for energy in relation to the performance value.

Individuals can self-reproduce via haploid mutation, which occurs only if the individual has an energy greater than a specific birth energy; during reproduction, an amount of energy equal to the birth energy is transferred from the parent to the child. In haploid reproduction a probabilistic test for self-reproduction is performed at every life cycle, and a probabilistic random mutation occurs on the genes according to the mutation rate and the mutation amplitude, which are themselves evolutionary [19].

When two individuals meet, a fight occurs. The winner is the individual characterized by the greater value of performance, and the loser transfers an amount of energy (fighting energy) to the winner. At every life cycle each individual's age is increased, and when the age reaches a value close to the average lifetime the probability of natural death increases. This ageing mechanism is very important because it guarantees that memory of very old solutions can be lost, so that the population can follow the evolution of the process. Another mechanism of death occurs when an individual reaches null energy, due to reproduction or to fighting with other individuals. For interested readers, a detailed description of the methodology is reported in [15][16].

The Artificial Life environment we used is called Artificial Societies, introduced in [20], but we made some modifications to the original one. In our implementation each individual of the initial population is initialised with a different network topology in its genotype. Network weights and activation functions, but not the topology, are subject to random mutations, and no crossover mechanism among different network topologies has been implemented, because the artificial life algorithm we used has no bi-sexual reproduction mechanism.
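The following Python sketch condenses the life-cycle rules just described (energy-based fights, haploid reproduction with self-adaptive mutation, ageing). It is our own simplification: random pairing stands in for movement on the 2-D lattice, and all numeric parameters are hypothetical placeholders:

    import random

    class Individual:
        def __init__(self, genotype, energy, mut_rate, mut_amp):
            self.genotype = genotype   # e.g. network weights (topology is fixed)
            self.energy = energy
            self.age = 0
            self.mut_rate = mut_rate   # mutation parameters evolve too [19]
            self.mut_amp = mut_amp

    def life_cycle(pop, fitness, birth_energy=1.0, fight_energy=0.2,
                   avg_lifetime=50, rng=random):
        # Meetings: random pairing replaces true lattice movement
        rng.shuffle(pop)
        for a, b in zip(pop[0::2], pop[1::2]):
            winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
            loser.energy -= fight_energy      # loser pays the winner
            winner.energy += fight_energy
        # Haploid reproduction: only if energy exceeds the birth energy
        children = []
        for ind in pop:
            if ind.energy > birth_energy and rng.random() < 0.1:
                child_genes = [g + rng.gauss(0.0, ind.mut_amp)
                               if rng.random() < ind.mut_rate else g
                               for g in ind.genotype]
                ind.energy -= birth_energy    # energy moves parent -> child
                children.append(Individual(child_genes, birth_energy,
                                           ind.mut_rate * rng.uniform(0.9, 1.1),
                                           ind.mut_amp * rng.uniform(0.9, 1.1)))
        # Ageing and death: null energy, or natural death near the average lifetime
        survivors = []
        for ind in pop + children:
            ind.age += 1
            too_old = ind.age > avg_lifetime and rng.random() < 0.5
            if ind.energy > 0 and not too_old:
                survivors.append(ind)
        return survivors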
EXPERIMENTATION

Experimentation concerned the optimal training of complex neural networks in order to solve six classification benchmarks taken from the UCI repository [21]. The neural optimisation task has been accomplished using the proposed evolutionary approach and compared to the following six methodologies: the Multilayer Perceptron (MLP) [22] trained with the Back-Propagation algorithm, KStar [23], MultiBoost (MB) [24], Voting Feature Intervals (VFI) [25], Particle Swarm Optimisation (PSO) [26] and evolutionary neural networks (ENN) [19]. For the first four the WEKA tool [27] has been used, and for PSO we took the results presented in [28]. Each data set (see Table III) has been split into two parts: a training set (75% of the whole data set) and a testing set (25% of the whole data set). The neural feed-forward topologies of the ENN and MLP models are reported in Table IV.

    Problem    Data set size   Training set size   Testing set size   Classes   Input size
    Diabetes   768             576                 192                2         8
    Heart      303             227                 76                 2         35
    Iris       150             112                 38                 3         4
    Wdbc       569             426                 143                2         30
    WdbcInt    699             524                 175                2         9
    Wine       178             133                 45                 3         13

Table III: data set features

    Problem    Topology (input-hidden-output)
    Diabetes   8-3-2
    Heart      35-4-2
    Iris       4-4-3
    Wdbc       30-4-2
    WdbcInt    9-3-2
    Wine       13-3-3

Table IV: neural feed-forward topologies
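Under our reading of this protocol (75/25 split, classification error on the testing set, averaged over ten runs), the evaluation could be sketched as follows; the paper does not say whether the split is re-drawn per run (here it is), and the fit/predict interface is a hypothetical placeholder:

    import random

    def run_error(classifier, data, train_frac=0.75, rng=random):
        """One run: shuffle, 75/25 split, classification error (%) on the test part."""
        rows = data[:]                      # data: list of (features, label) pairs
        rng.shuffle(rows)
        cut = int(train_frac * len(rows))
        model = classifier.fit(rows[:cut])  # hypothetical training interface
        wrong = sum(model.predict(x) != y for x, y in rows[cut:])
        return 100.0 * wrong / len(rows[cut:])

    def average_error(classifier, data, runs=10):
        """Average testing error over ten runs, as reported in Table V."""
        return sum(run_error(classifier, data) for _ in range(runs)) / runs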

For the Evolutionary Complex Neural Networks (ECNN), 30 hidden neurons were used for all the problems. Finally, to avoid over-fitting in the MLP, ENN and ECNN tests, we used an early stopping criterion by setting a limit of 300000 performance evaluations. Results are in Table V, where we show the average classification error (percentage) on the testing set. For each of the mentioned techniques, we performed ten runs on each problem.

    Problem    ECNN     ENN      MLP     PSO      KSTAR    MB       VFI
    Diabetes   19.06%   21.82%   21.9%   22.5%    32.29%   26.5%    54.69%
    Heart      13.16%   14.87%   17.1%   17.46%   25.00%   10.5%    18.42%
    Iris       3.16%    2.63%    2.63%   2.63%    5.26%    7.90%    7.90%
    Wdbc       1.47%    1.75%    2.10%   5.73%    5.59%    2.80%    5.59%
    WdbcInt    2.06%    1.09%    1.71%   2.87%    1.14%    4.00%    1.71%
    Wine       4.22%    3.33%    2.22%   4.44%    2.22%    22.2%    11.11%
    Average    7.19%    7.58%    7.94%   9.27%    11.92%   12.32%   16.57%

Table V: experimental testing results (classification error)

These results show the effectiveness of the proposed methodology based on evolutionary complex neural networks (ECNN). In fact, this method provides the best global performance, obtaining the best results on two problems. The comparison with the feed-forward MLP, trained with the Back-Propagation algorithm, and with ENN is particularly interesting, since it directly shows the performance improvement obtained with the suggested technique. In particular, this experimentation points out that the most remarkable improvement is achieved on the most difficult problems (diabetes, heart), suggesting that such complex models are worth using on complex problems, while for simple problems (iris, wdbcint, wine) simple architectures are better.

As regards CPU time, KSTAR, MB and VFI are very fast (1-2 seconds) because they are statistical clustering techniques which do not require a training stage. For PSO we took the results from [28], which does not report this information. For ECNN, because of the complexity of the topology, the average training time is about 50 minutes, while for the other methods the training time ranges from 25 to 190 seconds.

CONCLUSION

Complex networks, like the scale-free model proposed by Barabasi and Albert, are observed in many biological systems, and the application of these topologies to artificial neural networks leads to interesting considerations. In this paper, we presented a preliminary study on how to evolve neural networks with complex topologies, and in particular we focused on the scale-free model. Experimentation has been carried out on several well known benchmark problems, and we compared the proposed approach to six different methods, including simple feed-forward neural networks. The experimentation we did is to be considered only the beginning of the exploration of the union between neural networks and complex topologies. More extensive tests are needed, as well as comparisons of the structures studied in this paper with more optimisation and modelling methods. However, these preliminary results showed that the proposed models outperformed on average all the others, achieving the most remarkable performance on the difficult problems. Future work will focus on the topological analysis (like connectivity) of the results and on the evolution of complex topologies.

REFERENCES

[1] Yao, X., 1999: Evolving Artificial Neural Networks, Proceedings of the IEEE, 87(9): 1423-1447
[2] Alander, J. T., 1998: An indexed bibliography of genetic algorithms and neural networks, Technical Report 94-1-NN, University of Vaasa, Department of Information Technology and Production Economics
[3] Cantú-Paz, E. and Kamath, C., 2005: An empirical comparison of combinations of evolutionary algorithms and neural networks for classification problems, IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, 915-927

[4] Watts, D. J. and Strogatz, S. H., 1998: Collective dynamics of 'small-world' networks, Nature, 393(6684): 440-442
[5] Burns, G. A. P. C. and Young, M. P., 2000: Analysis of the connectional organization of neural systems associated with the hippocampus in rats, Philosophical Transactions of the Royal Society B: Biological Sciences, 355(1393): 55-70
[6] Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M. and Apkarian, A. V., 2005: Scale-Free Brain Functional Networks, Phys. Rev. Lett., 94, 018102
[7] Dorogovtsev, S. N. and Mendes, J. F. F., 2002: Evolution of Networks, Advances in Physics, 51(4): 1079-1187
[8] Barabasi, A.-L. and Albert, R., 1999: Emergence of scaling in random networks, Science, 286, 509-512
[9] Erdős, P. and Rényi, A., 1959: On random graphs, Publicationes Mathematicae Debrecen
[10] Watts, D. J., 1999: Small Worlds: The Dynamics of Networks Between Order and Randomness, Princeton University Press
[11] Cohen, R., Havlin, S. and ben-Avraham, D., 2002: Structural Properties of Scale-Free Networks, chap. 4 in Handbook of Graphs and Networks, Eds. S. Bornholdt and H. G. Schuster, Wiley-VCH
[12] Jeong, H., Neda, Z. and Barabasi, A.-L., 2003: Measuring preferential attachment for evolving networks, Europhys. Lett., 61, 567
[13] Langton, C., 1989: Artificial Life, Addison-Wesley, Redwood City/CA, USA
[14] Annunziato, M., Bertini, I., Lucchetti, M., Pannicelli, A. and Pizzuti, S., 2001: Adaptivity of Artificial Life Environment for On-Line Optimization of Evolving Dynamical Systems, Proc. EUNITE01, Tenerife, Spain
[15] Annunziato, M., Bertini, I., Pannicelli, A., Pizzuti, S. and Tsimring, L., 2000: Complexity and Control of Combustion Processes in Industry, Proc. of CCSI 2000 Complexity and Complex Systems in Industry, Warwick, UK
[16] Annunziato, M., Lucchetti, M., Orsini, G. and Pizzuti, S., 2005: Artificial life and on-line flows optimisation in energy networks, IEEE Swarm Intelligence Symposium, Pasadena (CA), USA
[17] Annunziato, M., Bertini, I., Pannicelli, A. and Pizzuti, S., 2006: A Nature-inspired-Modeling-Optimization-Control system applied to a waste incinerator plant, 2nd European Symposium NiSIS 06, Puerto de la Cruz, Tenerife (Spain)
[18] Annunziato, M., Bertini, I., Pannicelli, A. and Pizzuti, S., 2006: Evolutionary Control and On-Line Optimization of an MSWC Energy Process, Journal of Systemics, Cybernetics and Informatics, 4(4)
[19] Annunziato, M., Bertini, I., Iannone, R. and Pizzuti, S., 2006: Evolving feed-forward neural networks through evolutionary mutation parameters, 7th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 06), Burgos, Spain, 554-561
[20] Annunziato, M., Bruni, C., Lucchetti, M. and Pizzuti, S., 2003: Artificial life approach for continuous optimisation of non stationary dynamical systems, Integrated Computer-Aided Engineering, 10(2): 111-125
[21] Blake, C. L. and Merz, C. J., 1998: UCI repository of machine learning databases, University of California, Irvine, http://www.ics.uci.edu/~mlearn/mlrepository.html
[22] Rumelhart, D. E., Hinton, G. E. and Williams, R. J., 1986: Learning representations by back-propagating errors, Nature, 323, 533-536
[23] Cleary, J. G. and Trigg, L. E., 1995: K*: An Instance-based Learner Using an Entropic Distance Measure, Proceedings of the 12th International Conference on Machine Learning, 108-114

[24] Webb, G. I., 2000: MultiBoosting: a technique for combining boosting and wagging, Machine Learning, 40(2): 159-196
[25] Demiroz, G. and Guvenir, A., 1997: Classification by voting feature intervals, ECML-97
[26] Kennedy, J. and Eberhart, R. C., 1995: Particle swarm optimization, Proc. IEEE International Conference on Neural Networks, IV, Piscataway, NJ: IEEE Service Center, 1942-1948
[27] Witten, I. H. and Frank, E., 2000: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, San Francisco: Morgan Kaufmann
[28] De Falco, I., Della Cioppa, A. and Tarantino, E., 2005: Impiego della particle swarm optimization per la classificazione in database [Using particle swarm optimization for classification in databases], II Italian Artificial Life Workshop, Rome, Italy, ISTC-CNR