Hybrid Particle Swarm and Neural Network Approach for Streamflow Forecasting


Math. Model. Nat. Phenom. Vol. 5, No. 7, 2010, pp. 132-138 DOI: 10.1051/mmnp/01057

Hybrid Particle Swarm and Neural Network Approach for Streamflow Forecasting

A. Sedki* and D. Ouazar
Department of Civil Engineering, Mohammadia School of Engineering, University Mohammed V-Agdal, Rabat, Morocco

*Corresponding author. E-mail: asedki@emi.ac.ma

Article published by EDP Sciences and available at http://www.mmnp-journal.org or http://dx.doi.org/10.1051/mmnp/01057

Abstract. In this paper, an artificial neural network (ANN) trained by a hybrid algorithm combining particle swarm optimization (PSO) with back-propagation (BP) is proposed to forecast daily streamflows in a catchment located in a semi-arid region of Morocco. The PSO algorithm converges rapidly during the initial stages of a global search, while the BP algorithm achieves faster convergence around the global optimum. Combining the two yields the hybrid algorithm referred to in this paper as the PSO-BP algorithm. To evaluate the performance of the hybrid algorithm, a BP neural network is also included for comparison purposes. The results show that the neural network model evolved by the PSO-BP algorithm gives good predictions and better convergence performance.

Key words: artificial neural network, particle swarm optimization algorithm, back propagation, daily streamflows, catchment, semi-arid climate

AMS subject classification: 78M32

1. Introduction

Forecasting of streamflows is vital for flood warning, operation of flood-control reservoirs, determination of river water potential, production of hydroelectric energy, allocation of domestic and irrigation water in drought seasons, and navigation planning in rivers. Two basic approaches are usually used: conceptual modelling and black-box modelling. Conceptual models attempt to mathematically simulate the sub-processes and physical mechanisms that

govern the hydrological cycle. These models are generally non-linear, time-invariant, and deterministic, involving parameters that represent watershed characteristics [3]. Thus, they are often too complex and too demanding in terms of data and parameter requirements, and they require some degree of expertise and experience with the model [1]. In a black-box approach, a model is applied to identify a direct mapping between inputs and outputs without detailed consideration of the internal structure of the physical processes. One of the black-box techniques which has caught the attention of hydrologists in the recent past is the artificial neural network (ANN). The most widely used neural network model is the multilayer perceptron (MLP), in which connection weight training is normally carried out by a back-propagation (BP) learning algorithm [6]. Weight training in MLPs is usually formulated as minimization of an error function, such as the mean square error (MSE) between the network output and the target output over all examples, by iteratively adjusting the connection weights. The BP algorithm is gradient descent in essence. Despite its popularity as an optimization tool for neural network training, the BP algorithm has several drawbacks: training convergence is very slow, and the algorithm is easily trapped in a local minimum. Furthermore, the convergence behaviour of the BP algorithm depends very much on the choice of initial values of the network connection weights, as well as on algorithm parameters such as the learning rate and the momentum. These limitations make the back-propagation neural network (BPN) inconsistent and unpredictable across different applications [4], [8]. Therefore, it is necessary to develop an effective technique for optimizing the BPN in order to improve its forecasting performance.
Recently, investigations into neural network training using evolutionary algorithms (EAs) have been successfully employed to overcome the inherent limitations of the BP [4], [9]. EAs are considered capable of reducing the ill effects of the BP algorithm because they require neither gradient nor differentiability information. Among existing EAs, the best-known branch is genetic algorithms (GAs). Though the GA has many advantages over conventional methods, it also has some drawbacks, such as a slow rate of convergence and the large number of simulations needed to arrive at an optimum solution. Researchers have recently reported that particle swarm optimization (PSO) [2] improves on these issues, using the local and global search capabilities of the algorithm to find better-quality solutions in less computational time [7]. In this paper, the hybrid PSO-BP algorithm is employed to train multilayer perceptrons for daily flow forecasting in a semi-arid catchment in Morocco. The algorithm uses PSO for a global search at the beginning stage, and then uses the BP algorithm for a local search around the global optimum.

2. Adaptation of PSO to network training

The PSO algorithm is an adaptive algorithm based on a social-psychological metaphor. PSO is similar to the GA in that it begins with a random population matrix. The rows in the matrix are called particles (analogous to GA chromosomes). They contain the variable values and are not binary encoded. Each particle of the population has an adaptable velocity (position change), according to which it moves in the search space. Moreover, each particle has a memory, remembering the best position of the search space it has ever visited [2].

When the PSO algorithm is used to evolve the weights of an ANN, each particle represents all the weights of an ANN structure. The connection weight matrices of the i-th particle are represented as

W_i = {W_i^[1], W_i^[2]},   (2.1)

where W_i^[1] and W_i^[2] denote the connection weight matrices of the i-th particle between the input layer and the hidden layer, and between the hidden layer and the output layer, respectively. The position of the previous best fitness value of any particle is represented by

P_i = {P_i^[1], P_i^[2]},   (2.2)

where P_i^[1] and P_i^[2] represent the position of the previous best fitness value of the i-th particle, for the input-to-hidden and hidden-to-output weights, respectively. The index of the best particle among all the particles in the population is denoted by g, so the best position matrix is

P_g = {P_g^[1], P_g^[2]},   (2.3)

with the same layer-wise meaning. The velocity of particle i is denoted by

V_i = {V_i^[1], V_i^[2]}.   (2.4)
If m and n denote the row and column indices of the matrices, the particles are manipulated as follows:

V_i^[j](m, n) = χ [ ω V_i^[j](m, n) + c_1 r_1 (P_i^[j](m, n) − W_i^[j](m, n)) + c_2 r_2 (P_g^[j](m, n) − W_i^[j](m, n)) ],   (2.5)

W_i^[j] = W_i^[j] + V_i^[j],   (2.6)
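A minimal sketch of this per-particle update in Python. The network dimensions, swarm seed, and parameter values are illustrative assumptions, not taken from the paper; `W`, `V`, `P`, and `Pg` hold the two layer matrices of one particle, its velocity, its personal best, and the global best.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed): 3 inputs (runoff at t-1, t-2, t-3),
# 5 hidden units, 1 output.
n_in, n_hid, n_out = 3, 5, 1

def init_weights():
    # A particle holds both layer matrices of the network:
    # index 0 maps input -> hidden, index 1 maps hidden -> output.
    return [rng.uniform(-1.0, 1.0, (n_hid, n_in)),
            rng.uniform(-1.0, 1.0, (n_out, n_hid))]

def pso_step(W, V, P, Pg, omega, c1=2.0, c2=2.0, chi=1.0):
    # One element-wise velocity and position update for a single particle,
    # following the two update equations above.
    for j in range(2):
        r1 = rng.random(W[j].shape)
        r2 = rng.random(W[j].shape)
        V[j] = chi * (omega * V[j]
                      + c1 * r1 * (P[j] - W[j])
                      + c2 * r2 * (Pg[j] - W[j]))
        W[j] = W[j] + V[j]
    return W, V
```

In a full training loop this step would run once per particle per generation, with `P` and `Pg` refreshed whenever a particle improves its best fitness.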

where j = 1, 2; m = 1, ..., M_j; n = 1, ..., N_j; M_j and N_j are the row and column sizes of the matrices W, P, and V; c_1 and c_2 are two positive constants called learning factors; and r_1 and r_2 are random numbers in the range (0, 1). ω is called the inertia weight, and χ is a constriction factor used, alternatively to ω, to limit velocity. Eq. (2.5) calculates the particle's new velocity from its previous velocity and the distances of its current position from its own best position and the group's best position. The particle then flies toward a new position according to Eq. (2.6). Proper fine-tuning of the parameters c_1 and c_2 in Eq. (2.5) may result in faster convergence of the algorithm and alleviate the problem of local minima. The role of the inertia weight ω in Eq. (2.5) is to control the impact of the previous velocities on the current one. A large inertia weight facilitates global exploration (searching new areas), while a small weight tends to facilitate local exploration. Hence a suitable value of ω usually helps reduce the number of iterations required to locate the optimum solution [5]. The variable ω is updated according to

ω = (maxiter − iter) / maxiter,   (2.7)

where iter is the current iteration number and maxiter is the maximum number of allowable iterations.

3. Hybrid PSO-BP algorithm

The PSO-BP algorithm combines the PSO with the BP for weight training of a multi-layer neural network. The PSO algorithm is a global algorithm with a strong ability to find a globally optimal result; however, its search around the global optimum is very slow. The BP algorithm, on the contrary, has a strong ability to find a locally optimal result, but its ability to find the global optimum is weak.
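The linearly decreasing inertia schedule ω = (maxiter − iter)/maxiter can be written directly; the function name is mine:

```python
def inertia_weight(iteration, maxiter):
    # Linearly decreasing inertia: 1 at the first iteration, 0 at the last,
    # shifting the swarm from global exploration to local exploitation.
    return (maxiter - iteration) / maxiter
```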
The fundamental idea of the hybrid algorithm is that at the beginning of the search the PSO is employed to accelerate training. When the fitness function value has not changed for some iterations, or the change is smaller than a predefined number, the search is switched to gradient-descent search according to this heuristic.

4. Application Case

Data used in this paper are from the Ourika basin, located in the semi-arid region of Marrakech; it is the most important sub-catchment of the Tensift drainage basin. The basin area is 503 km², and its mean annual rainfall is approximately 800 mm. The daily streamflow at the Aghbalou station is used for model investigation. The data cover a period of eight years (1996 to 2003). Data from 1996 to 2002 constitute the training set, and the 365 remaining records are used in the testing phase.
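Returning to the hybrid scheme of Section 3, the PSO-to-BP handover criterion — switch when the best fitness has stalled — can be sketched as follows. The `patience` and `tol` values are illustrative assumptions; the paper only says "some iterations" and "a predefined number".

```python
def should_switch_to_bp(best_mse_history, patience=5, tol=1e-6):
    # Switch from PSO to BP once the best MSE has improved by less than
    # `tol` over the last `patience` iterations (values are illustrative).
    if len(best_mse_history) <= patience:
        return False
    window = best_mse_history[-(patience + 1):]
    return (window[0] - min(window[1:])) < tol
```

The training driver would call this once per PSO generation with the running list of best-particle MSE values, then hand the global-best weights to the BP optimizer when it returns `True`.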

5. Results and discussion

To forecast the runoff for day t, the input layer consists of 3 nodes representing the daily runoff values at times t−1, t−2 and t−3. Figure 1 compares the computed and observed flows during the testing period of 2003: Figure 1(a) shows the forecasts of the PSO-BP algorithm, and Figure 1(b) those of the BP algorithm. The PSO-BP algorithm was trained for 30 generations, followed by BP training. The results produced by the ANN based on the PSO-BP algorithm are closer to the observed data than those of the BP neural network.

To evaluate model performance, three criteria are adopted: mean square error (MSE), the efficiency coefficient R², and mean relative error (MRE). Table 1 gives the performance comparison of the two models in the testing phase. The hybrid algorithm clearly generates more accurate predictions and exhibits better performance than the BP algorithm.

The convergence process of the ANN based on the PSO-BP algorithm is shown in Figure 2. Figure 2(a) shows the best MSE in each generation during PSO training; from this figure we can see how the best MSE, represented by the best particle, varies with the number of generations. After 10 generations the improvement in the best MSE slows, because the swarm is close to the global optimum. After 30 generations the PSO-BP algorithm switches to the gradient-descent (BP) search for 6000 iterations. Figure 2(b) shows the MSE at each iteration during BP training. This figure shows that, although PSO greatly reduced the total MSE of the neural network, further improvement in training performance is achieved by applying the back-propagation weight-adjustment procedure. Table 2 shows the training performance at various fitness evaluation counts for both the standard BP and the hybrid PSO-BP.
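The three criteria can be computed as below. The paper does not spell out its formulas; assuming MSE and MRE take their usual forms and that the "efficiency coefficient R²" is the Nash-Sutcliffe efficiency commonly used in streamflow studies:

```python
import numpy as np

def mse(obs, sim):
    # Mean square error between observed and simulated flows.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean((obs - sim) ** 2))

def efficiency_r2(obs, sim):
    # Nash-Sutcliffe efficiency (an assumption: the paper does not give
    # its exact definition of the efficiency coefficient R^2).
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

def mre(obs, sim):
    # Mean relative error; assumes all observed flows are nonzero.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(obs - sim) / obs))
```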
The maximum iteration number for each algorithm is set to 6000. The MSE of the PSO-BP reaches 0.0016 after only 10 fitness evaluations, much faster than the BP algorithm, whose MSE does not reach 0.0016 within 6000 iterations; the minimum MSE attained by the PSO-BP is 0.0013, at 2500 evaluations. Thus, the PSO-BP algorithm presents better convergence performance than the BP algorithm.

[Figure 1: Comparison between predicted and measured flow values in the testing phase; panel (a) PSO-BP algorithm, panel (b) BP algorithm. Observed and predicted flow (m³/s) over the 365-day testing period.]

[Figure 2: Neural network training performance for the PSO-BP algorithm; panel (a) best MSE by the PSO algorithm over 30 generations, panel (b) MSE during BP training over 6000 iterations.]

Table 1: Comparison of the PSO-BP and BP neural networks in the testing phase.

Algorithm | R²   | MSE    | MRE
PSO-BP    | 0.96 | 0.0019 | 0.006
BP        | 0.93 | 0.0030 | 0.08

Table 2: Convergence comparison of the PSO-BP and BP neural networks.

Fitness evaluations | PSO-BP MSE | BP MSE
10                  | 0.0016     | 0.0089
2000                | 0.001      | 0.003
2500                | 0.0013     | 0.0017
6000                | 0.0013     | 0.0017

6. Conclusion

This paper presents the application of a hybrid PSO algorithm for training ANNs to forecast daily streamflows in a catchment located in a semi-arid climate in Morocco. The hybrid algorithm combines the PSO algorithm's strong ability in global search with the BP algorithm's strong ability in local search. The training and verification simulations show that the streamflow forecasts are more accurate, and the convergence performance better, than those of the conventional BP neural network.

References

[1] Q. Duan, S. Sorooshian, V. Gupta. Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour. Res., 28 (1992), 1015-1031.

[2] R.C. Eberhart, J. Kennedy. A new optimizer using particle swarm theory. Proc. 6th Int. Symp. on Micro Machine and Human Science, IEEE Service Center, Piscataway, NJ, (1995), 39-43.
[3] K.L. Hsu, H.V. Gupta, S. Sorooshian. Artificial neural network modeling of the rainfall-runoff process. Water Resour. Res., 31 (1995), No. 10, 2517-2530.
[4] V. Maniezzo. Genetic evolution of the topology and weight distribution of neural networks. IEEE Transactions on Neural Networks, 5 (1994), 39-53.
[5] K.E. Parsopoulos, M.N. Vrahatis. Recent approaches to global optimization problems through particle swarm optimization. Natural Comput., 1 (2002), No. 2-3, 235-306.
[6] D.E. Rumelhart, G.E. Hinton, R.J. Williams. Learning internal representations by error propagation. In: Rumelhart, D.E., McClelland, J.L. (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. MIT Press, Cambridge, MA, (1986), 318-362.
[7] A. Salman, I. Ahmad, S. Al-Madani. Particle swarm optimization for task assignment problem. Microprocessors and Microsystems, 26 (2002), No. 8, 363-371.
[8] R.S. Sexton, R.E. Dorsey, J.D. Johnson. Toward global optimization of neural networks: a comparison of the genetic algorithm and backpropagation. Decision Support Systems, 22 (1998), 171-185.
[9] J.M. Yang, C.Y. Kao. A robust evolutionary algorithm for training neural networks. Neural Comput. Appl., 10 (2001), 214-230.