Copyright 2002 IFAC
15th Triennial World Congress, Barcelona, Spain

ARMA MODEL SELECTION USING PARTICLE SWARM OPTIMIZATION AND AIC CRITERIA

Mark S. Voss (a) and Xin Feng (b)

(a) Department of Civil and Environmental Engineering, voss@rocketmail.com
(b) Department of Electrical and Computer Engineering, fengx@marquette.edu
Marquette University, Milwaukee, Wisconsin

Abstract: This paper presents a new method for determining ARMA model parameters using Particle Swarm Optimization (PSO). PSO is a new optimization method based on a social-psychological metaphor. Each ARMA model is represented as a particle in the particle swarm. Particles in a swarm move in discrete steps based on their current velocity, their memory of where they found their personal best fitness value, and a desire to move toward the position where the best fitness value found so far by all of the particles was discovered. PSO is applied to determine the ARMA parameters for the Wolfer Sunspot Data. The method is then extended using Akaike's Information Criterion (AIC): PSO is used to simultaneously optimize the parameters and select an estimated "best approximating ARMA model" based on AIC. Several plots are included to illustrate how the method converges for various PSO parameter settings.

Keywords: ARMA Models, ARMA Parameter Estimation, Identification Algorithms, Agents, Genetic Algorithms, Model Approximation, Particle Swarm Optimization.

1. INTRODUCTION

System Identification (Ljung, 1999; Pandit, 1983; Söderström, 1989) is concerned with the derivation of mathematical models from experimental data. Given a data set, one typically applies a set of candidate models and chooses one of the models based on a set of rules by which the models can be assessed. One of the simplest System Identification models is the Autoregressive Moving Average (ARMA) model, shown in equation (1):

$y_t + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_n y_{t-n} = \theta_0 + \theta_1 a_{t-1} + \theta_2 a_{t-2} + \cdots + \theta_m a_{t-m}$   (1)

where $\phi_i$, $i = 1, \ldots, n$ and $\theta_j$, $j = 1, \ldots, m$ are the parameters of an ARMA(n, m) model and $a_t$ is the shock (noise) series. For a given ARMA(n, m) model, the parameters $\phi_i$ and $\theta_j$ are selected such that equation (2) is minimized:

$\sum_{t=1}^{N} \left( y_{t\_data} - y_{t\_ARMA} \right)^2$   (2)

where

$N = \text{number of data points.}$   (3)
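To make the fitness computation of equations (1)-(3) concrete, here is a minimal sketch in Python; it is an illustration under stated assumptions, not code from the paper. The function name `arma_sse` is hypothetical, and the shocks $a_t$ are estimated recursively as one-step-ahead prediction residuals, which is one standard way of evaluating equation (2).

```python
import numpy as np

def arma_sse(y, phi, theta0, theta):
    """Sum-squared one-step-ahead error, equations (1)-(3).

    Model (1): y_t + phi_1*y_{t-1} + ... + phi_n*y_{t-n}
                 = theta_0 + theta_1*a_{t-1} + ... + theta_m*a_{t-m},
    where the shocks a_t are estimated as one-step prediction residuals.
    """
    n, m = len(phi), len(theta)
    N = len(y)
    a = np.zeros(N)                      # estimated shock series
    sse = 0.0
    for t in range(max(n, m), N):
        ar = -sum(phi[i] * y[t - 1 - i] for i in range(n))
        ma = theta0 + sum(theta[j] * a[t - 1 - j] for j in range(m))
        yhat = ar + ma                   # one-step-ahead prediction
        a[t] = y[t] - yhat               # residual = shock estimate
        sse += a[t] ** 2                 # accumulates equation (2)
    return sse
```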

2. PARTICLE SWARM OPTIMIZATION (PSO)

The Particle Swarm Algorithm is an adaptive algorithm based on a social-psychological metaphor (Kennedy and Eberhart, 2001a). A population of individuals adapts by returning stochastically toward previously successful regions of the search space, influenced by the successes of their topological neighbors. Most particle swarms are based on two sociometric principles. Particles fly through the solution space and are influenced both by the best particle in the population and by the best solution that the current particle has discovered so far. The best particle in the population is typically denoted gbest (global best), while the best position that has been visited by the current particle is denoted lbest (local best). The gbest individual conceptually connects all members of the population to one another: each particle is influenced by the very best performance of any member of the entire population. The lbest individual conceptually represents the ability of particles to remember past personal successes.

Particle Swarm Optimization is a relatively new addition to the evolutionary computation methodology, but the performance of PSO has been shown to be competitive with more mature methodologies (Eberhart and Shi, 1998a; Kennedy and Spears, 1998b). Since it is relatively straightforward to extend PSO by attaching mechanisms employed by other evolutionary computation methods to increase its performance, PSO has the potential to become an excellent framework for building custom high-performance stochastic optimizers (Løvbjerg et al., 2001). It is interesting to note that PSO can be considered a form of continuous-valued cellular automata, which allows its hybridizations to extend into areas other than computational intelligence (Kennedy and Eberhart, 2001a).

2.1 PSO Equations

The i-th particle is represented as

$X_I = (x_{i1}, x_{i2}, \ldots, x_{iD})$   (4)

where D is the dimensionality of the problem. The rate of position change (velocity) of the i-th particle is represented by

$V_I = (v_{i1}, v_{i2}, \ldots, v_{iD})$   (5)

where $v_{ik}$ is the velocity in dimension k for particle i. The best previous position (the position giving the best fitness value) of the i-th particle is represented as

$P_I = (p_{i1}, p_{i2}, \ldots, p_{iD})$   (6)

and the best previous position so far achieved by any of the particles is recorded and represented as

$P_G = (p_{g1}, p_{g2}, \ldots, p_{gD})$   (7)

On each iteration, the velocity in each dimension of each particle is updated by

$v_{ik} = w_k v_{ik} + c_1 \varphi_1 (p_{ik} - x_{ik}) + c_2 \varphi_2 (p_{gk} - x_{ik}), \quad k = 1, 2, \ldots, D$   (8)

where $w_k$ is the inertia weight, which typically ranges from 0.9 to 1.2, and $c_1$ and $c_2$ are constant values typically in the range of 2 to 4. These constants are multiplied by $\varphi_1$, $\varphi_2$ (uniform random numbers between 0 and 1) and by a measure of how far the particle is from its personal best and from the global best particle found so far. From a social point of view, the particle moves based on its current direction ($w_k v_{ik}$), its memory of where it found its personal best ($p_{ik}$), and a desire to be like the best particle in the population ($p_{gk}$).

2.2 PSO Position Update Rule

After a new velocity for each particle is calculated, each particle's position is updated according to

$x_{ik} = x_{ik} + v_{ik}$   (9)

It typically takes a particle swarm a few hundred to a few thousand updates to converge, depending on the parameter selections within the PSO algorithm (Eberhart and Shi, 1998b).

2.3 Particle Swarm Parameter Settings

Particle Swarm Optimization was used to determine the ARMA parameters for the Wolfer Sunspot Data (1770-1869). The PSO parameter values that were used are given in Tables 1 and 2.

Table 1. Particle Swarm Parameter Settings

    Parameter              Setting
    Population Size        80
    Number of Iterations   1000
    c1 and c2              2.0
    Inertial Weight        0.7 (Figs. 1, 2, and 3)
    Inertial Weight        0.7 to 0.9 (Figs. 4 and 5)
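Before turning to the particle encoding in Table 2, note that equations (8) and (9) map directly onto a vectorized update. The sketch below is illustrative only: `pso_step` is a hypothetical name, the swarm is stored as NumPy arrays, and the maximum absolute velocity of Table 2 is applied as a clip. A linearly decreasing inertia weight (used later, in Section 3.2) amounts to passing a different `w` on each call.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0, v_max=5.0):
    """One swarm iteration: velocity update (8) then position update (9).

    x, v, p_best : (pop_size, D) arrays of positions, velocities, and
                   personal-best positions; g_best : (D,) global best.
    """
    phi1 = rng.random(x.shape)               # uniform random in [0, 1]
    phi2 = rng.random(x.shape)
    v = w * v + c1 * phi1 * (p_best - x) + c2 * phi2 * (g_best - x)
    v = np.clip(v, -v_max, v_max)            # maximum absolute velocity
    return x + v, v                          # equation (9)
```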

Table 2. Particle Variable Settings

    Dim   Variable   xmin   xmax   vmax
    1     n            1     10      3
    2     m            0     10      3
    3     φ_1        -25     25      5
    4     φ_2        -25     25      5
    ...
    12    φ_10       -25     25      5
    13    θ_0        -25     25      5
    ...
    23    θ_10       -25     25      5

Table 2 illustrates the data structure for a particular particle. Note that each variable has minimum and maximum "x" values along with a maximum absolute velocity. Having separate limits for each particle dimension is an extension of the simple PSO. Dimension 1 is used for the autoregressive order, which has been set to be between 1 and 10 inclusive. Similarly, dimension 2 is used for the moving average order. That is, each particle represents an ARMA(n, m) model where n and m can assume values 1 through 10 and 0 through 10, respectively. The values that the ARMA parameters φ and θ can assume have also been limited to ±25, with a maximum absolute velocity of 5. It is apparent that expert knowledge about specific variable domains can increase the performance of the algorithm when the particle variable settings are chosen. Setting the PSO particle variable settings is analogous to the procedure for setting up variable representations within a binary genetic algorithm, where one needs to choose an interval and a string length for each variable (Angeline, 1998). A sketch of one plausible decoding of this particle layout appears after Section 2.5 below.

2.4 Results: PSO + ARMA = SwARMA

Initially, n and m were fixed to test the ability of PSO to determine the ARMA parameters for a fixed ARMA model. The results of this study are shown in Fig. 1. This model was chosen to demonstrate the use of PSO on a well understood System Identification problem. The authors were somewhat surprised at the results: the PSO method converged in less than a minute on a 200 MHz Pentium PC, and in all cases the solution found was substantially better than those found using the Wolfer Sunspot example provided with the IMSL Libraries. It is recognized that the IMSL Libraries may not be "optimal"; they are used here to provide a benchmark value.

Fig. 1. Wolfer Sunspot Data (1770-1869), fixed SwARMA models, Weight = 0.7. Legend: Sample Data; SwARMA(2,1), LSE = 20,94; IMSL ARMA(2,1), LSE = 24,549; SwARMA(4,2), LSE = 20,857.

For an ARMA(2,1) model, the PSO solution was 15% better than the IMSL solution. In light of these results, the authors chose to call the combination of PSO with ARMA "SwARMA" (Voss and Feng, 2001). These results suggest some interesting future research with regard to traditional System Identification.

2.5 Results: Static SwARMA Convergence

The convergence of the static ARMA(2,1) and ARMA(4,2) models, using the particle variable settings given in Table 2, is shown in Fig. 2. The dependent variable is the sum squared error (SSE), which is also the fitness of the best particle in the population. The independent variable is the number of fitness function evaluations, calculated as the number of particle iterations times the population size. Plotting the number of function evaluations is more informative than plotting the iterations. The fixed ARMA(2,1) model converges faster than the fixed ARMA(4,2) model, as expected. Most of the fitness improvement is achieved after only 1000 function evaluations; the plateau after 1000 evaluations suggests that the algorithm might have a more optimal parameter configuration.
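As referenced under Table 2, each particle mixes integer order variables (dimensions 1 and 2) with continuous ARMA coefficients. Below is a minimal sketch of decoding such a 23-dimensional particle and scoring it with the SSE fitness; `decode_and_score` is a hypothetical name, `arma_sse` is the helper sketched in the Introduction, and rounding the order dimensions to the nearest integer is an assumption of this sketch, since the paper does not spell out its discretization.

```python
def decode_and_score(x, y):
    """Decode a 23-dimensional particle (Table 2) into an ARMA(n, m)
    model and return its SSE fitness (equation 2).

    Assumed layout: x[0] = n (1..10), x[1] = m (0..10),
    x[2:12] = phi_1..phi_10, x[12:23] = theta_0..theta_10.
    """
    n = int(round(min(max(x[0], 1), 10)))   # clamp, then round the AR order
    m = int(round(min(max(x[1], 0), 10)))   # clamp, then round the MA order
    phi = x[2:2 + n]                        # first n of the 10 AR slots
    theta0 = x[12]
    theta = x[13:13 + m]                    # first m of the 10 MA slots
    return arma_sse(y, phi, theta0, theta)
```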

Fig. 2. Convergence plot for the fixed SwARMA(2,1) and SwARMA(4,2) models on the Wolfer Sunspot Data (1770-1869): SSE (thousands) versus function evaluations. Weight = 0.7.

2.6 Results: Dynamic SwARMA Convergence

Fig. 3 illustrates the effect of allowing dimensions "1" and "2" (which correspond to n and m in an ARMA(n, m) model) to participate in the Particle Swarm Algorithm.

Fig. 3. Convergence plot for the variable SwARMA model: SSE (thousands) versus function evaluations. Weight = 0.7.

The best model after 60,000 function evaluations (750 iterations) was an ARMA(8,3) model; this model was plotted against the fixed ARMA models discussed in Fig. 2. The constant improvement is caused by the model simultaneously optimizing its parameters while increasing the number of ARMA parameters. Observing this behavior is necessary in order to extend the SwARMA model to incorporate Akaike's Information Criterion (AIC) (Burnham and Anderson, 1998) into the model selection process; AIC is discussed in the next section. The ARMA(8,3) model is plotted in Fig. 4. This model is somewhat large (8 + 3 = 11 parameters) for the 100 data points used. The next section discusses selecting models using a constraint that takes both model fit and model complexity into consideration.

Fig. 4. Wolfer Sunspot Data (1770-1869) with the variable SwARMA(8,3) model, LSE = 14,969. Weight = 0.7.

3. AKAIKE'S INFORMATION CRITERION (AIC)

Akaike's Information Criterion is an information-theoretic approach for selecting the estimated best approximating ARMA model. AIC can be expressed as

$\mathrm{AIC} = n \log(\hat{\sigma}^2) + 2K$   (10)

where

$\hat{\sigma}^2 = \frac{1}{N} \sum_{t=1}^{N} \left( y_{t\_data} - y_{t\_ARMA} \right)^2$   (11)

and

$K = n + m$, the total number of ARMA(n, m) model parameters.   (12)
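Integrating AIC into the swarm then amounts to swapping the particle fitness function, as Section 3.2 describes. A minimal sketch follows, reusing the hypothetical `arma_sse` helper from the Introduction; `aic_fitness` is likewise a hypothetical name, and the $n$ in equation (10) is read here as the number of data points $N$.

```python
import numpy as np

def aic_fitness(y, n, m, phi, theta0, theta):
    """AIC fitness (equation 10) for an ARMA(n, m) particle."""
    N = len(y)
    sigma2 = arma_sse(y, phi, theta0, theta) / N   # equation (11)
    K = n + m                                      # equation (12)
    return N * np.log(sigma2) + 2 * K              # equation (10): minimize
```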

3.1 AIC: A Heuristic Interpretation

AIC is sometimes interpreted as the sum of two terms: the first is a measure of model fit, while the second is a penalty on the number of model parameters. This is an acceptable heuristic interpretation as long as one does not forget that the second term is not an arbitrary penalty; rather, it is derived from a link between information theory and the Kullback-Leibler (K-L) distance (Burnham and Anderson, 1998).

3.2 Results: Dynamic ARMA Convergence

AIC was easily integrated into the SwARMA methodology by simply replacing the particle fitness function (equation 2) with equation (10). It was demonstrated in Fig. 3 that the variable SwARMA model became more complex with increasing iterations; Fig. 5 demonstrates the ability of AIC to restrict model complexity.

Fig. 5 also demonstrates the advantageous use of dynamic inertial weights. For the variable SwARMA(2,1) model, the particles' inertial weight was linearly decreased from 0.9 to 0.7 during the run. The effect was to delay convergence during the beginning of the run; that is, the higher initial inertial weight prevented the swarm from prematurely converging on a suboptimal solution. This can be observed as the shift to the right exhibited by the SwARMA(2,1) run. Fig. 5 illustrates the importance of understanding the behavior of particles (Kennedy, 1998c) under the chosen parameter settings.

Fig. 5. Convergence plot for variable SwARMA-AIC models: AIC versus function evaluations. Legend: SwARMA(1,2), LSE = 22,777, W = 0.9; SwARMA(1,2), LSE = 22,777, W = 0.7; SwARMA(2,1), LSE = 20,95, W = 0.9 to 0.7.

4. CONCLUSION

Particle Swarm Optimization (PSO) was demonstrated to be a viable method for determining the model parameters of a fixed-size ARMA model. The PSO methodology was extended to solve variable ARMA models based on a sum squared error (SSE) criterion; the variable ARMA model demonstrated the ability of PSO to dynamically select the model size while decreasing the SSE. Akaike's Information Criterion (AIC) was then integrated into the variable SwARMA model to allow ARMA model selection based on an information-theoretic approach (SwARMA-AIC). Convergence plots for SwARMA-AIC demonstrated the ability of this hybrid algorithm to use AIC to select an estimated best approximating ARMA model. Particle Swarm Optimization has been shown to be a promising tool for optimizing ARMA models.

REFERENCES

Angeline, P.J. (1998). Evolutionary Optimization Versus Particle Swarm Optimization: Philosophy and Performance Differences. In: Evolutionary Programming VII (Porto, V.W., Saravanan, N., Waagen, D. and Eiben, A.E., eds.), LNCS 1447, pp. 611-618. Springer.

Burnham, K.P. and Anderson, D.R. (1998). Model Selection and Inference. Springer-Verlag, New York, NY.

Eberhart, R.C. and Shi, Y. (1998a). Comparison between Genetic Algorithms and Particle Swarm Optimization. In: Evolutionary Programming VII (Porto, V.W., Saravanan, N., Waagen, D. and Eiben, A.E., eds.), LNCS 1447, pp. 611-618. Springer-Verlag.

Eberhart, R.C. and Shi, Y. (1998b). Parameter Selection in Particle Swarm Optimization. In: Evolutionary Programming VII (Porto, V.W., Saravanan, N., Waagen, D. and Eiben, A.E., eds.), LNCS 1447, pp. 591-600. Springer-Verlag.

Kennedy, J. and Eberhart, R.C., with Shi, Y. (2001a). Swarm Intelligence. Morgan Kaufmann, San Francisco, CA.

Kennedy, J. and Spears, W.M. (1998b). Matching Algorithms to Problems: An Experimental Test of the Particle Swarm and Some Genetic Algorithms on the Multimodal Problem Generator. In: Proceedings of the 1998 International Conference on Evolutionary Computation, pp. 78-83.

Kennedy, J. (1998c). The behavior of particles. In: Evolutionary Programming VII (Porto, V.W., Saravanan, N., Waagen, D. and Eiben, A.E., eds.), LNCS 1447, pp. 581-590. Springer-Verlag.

Løvbjerg, M., Rasmussen, T.K. and Krink, T. (2001). Hybrid Particle Swarm Optimizer with Breeding and Subpopulations. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001). Morgan Kaufmann, San Francisco, CA.

Ljung, L. (1999). System Identification: Theory for the User. Prentice Hall.

Pandit, S.M. and Wu, S. (1983). Time Series and System Analysis with Applications. John Wiley & Sons.

Söderström, T. and Stoica, P. (1989). System Identification. Prentice Hall.

Voss, M.S. and Feng, X. (2001). Emergent System Identification using Particle Swarm Optimization. In: Proceedings of the Complex Adaptive Structures Conference. SPIE, Bellingham, WA.