Iterative Improvement of the Multiplicative Update NMF Algorithm using Nature-inspired Optimization


2011 Seventh International Conference on Natural Computation

Iterative Improvement of the Multiplicative Update NMF Algorithm using Nature-inspired Optimization

Andreas Janecek, Key Laboratory of Machine Perception (MOE), and Department of Machine Intelligence; School of Electronics Engineering and Computer Science, Peking University, Beijing, China

Ying Tan, Key Laboratory of Machine Perception (MOE), and Department of Machine Intelligence; School of Electronics Engineering and Computer Science, Peking University, Beijing, China

Abstract. Low-rank approximations of data (e.g., based on the Singular Value Decomposition) have proven very useful in various data mining applications. The Non-negative Matrix Factorization (NMF) leads to special low-rank approximations which satisfy non-negativity constraints. The Multiplicative Update (MU) algorithm is one of the two original NMF algorithms and is still one of the fastest NMF algorithms per iteration. Nevertheless, MU demands a quite large number of iterations in order to provide an accurate approximation of the original data. In this paper we present a new iterative update strategy for the MU algorithm based on nature-inspired optimization algorithms. The goal is to achieve a better accuracy per runtime compared to the standard version of MU. Several properties of the NMF objective function underlying the MU algorithm motivate the utilization of heuristic search algorithms. Indeed, this function is usually non-differentiable, discontinuous, and may possess many local minima. Experimental results show that our new iterative update strategy for the MU algorithm achieves the same approximation error as the standard version in significantly fewer iterations and in faster overall runtime.

I. INTRODUCTION

The Non-negative Matrix Factorization (NMF, [13]) leads to a low-rank approximation which satisfies non-negativity constraints. Contrary to other low-rank approximations, these constraints may improve the sparseness of the factors and, due to the additive parts-based representation, also improve interpretability [2], [8], [13]. NMF consists of reduced-rank non-negative factors $W \in \mathbb{R}^{m \times k}$ and $H \in \mathbb{R}^{k \times n}$ with $k \ll \min\{m, n\}$ that approximate a matrix $A \in \mathbb{R}^{m \times n}$ by $A \approx WH$. NMF requires all entries in A, W and H to be zero or positive. Due to its non-negativity constraints, NMF produces so-called additive parts-based representations of the data (in contrast to other linear representations such as the SVD). This is an important benefit of NMF, since it makes the interpretation of the NMF factors much easier than for factors containing positive and negative entries. The Multiplicative Update (MU) algorithm is one of the two original NMF algorithms presented in [13] and is still one of the fastest NMF algorithms per iteration. Hence, it can be used to achieve a rough but very fast approximation [7]. Over the last decades, nature-inspired metaheuristic algorithms have gained much popularity due to their applicability to various optimization problems. The main advantage of these algorithms is the fact that they are able to find acceptable results within a reasonable amount of time for many complex, large and dynamic problems. Although they lack the ability to guarantee the optimal solution for a given problem, it has been shown that they are able to tackle various kinds of real-world optimization problems [5]. The goal of this paper is to iteratively improve the approximation quality of the MU algorithm by utilizing nature-inspired metaheuristics.
This improvement is achieved by optimizing selected rows of W and/or selected columns of H, respectively, during the first iterations of the factorization. The idea of using optimization heuristics is strengthened by several properties of the NMF objective function underlying the MU algorithm. This function is usually non-differentiable, discontinuous, and may possess many local minima. Since metaheuristic algorithms are known to deal well with such difficulties, they seem to be a promising choice for the optimization process. The presented iterative update strategy is successful if it is able to achieve a better approximation error than the standard version of the MU algorithm in faster overall runtime. The iterative update process of the MU algorithm and the sequential optimization of selected rows of W and/or selected columns of H in each iteration allow for parallel and/or distributed computation. The tasks of optimizing different rows of W and different columns of H can be split up into several partly independent sub-tasks and can thus be executed concurrently.

Related Work. In 1994 Paatero and Tapper [16] published an early article on positive matrix factorization, but the work by Lee and Seung [13] five years later achieved much more popularity and is known as a standard reference for NMF. The original Multiplicative Update algorithm introduced in [13] provides a good baseline against which other algorithms (e.g., the Alternating Least Squares algorithm [16], the Gradient Descent algorithm [15], ALSPGRAD [15], quasi Newton-type NMF [11], fastnmf and bayesnmf [19], etc.) have to be judged. Although other algorithms are often able to achieve a better approximation at convergence, the MU algorithm is still the fastest NMF algorithm per iteration and a good choice if a very fast and rough approximation is needed [7].

Only few studies can be found in the literature that aim at combining NMF and metaheuristics; most of them are based on Genetic Algorithms (GAs). In [21], the authors have investigated the application of GAs on sparse NMF for microarray analysis, while [20] have applied GAs for boolean matrix factorization, a variant of NMF for binary data based on Boolean algebra. However, their work differs significantly from the techniques introduced in this paper. In a preceding study we successfully applied nature-inspired algorithms as initialization enhancers for NMF [9]. We are not aware of any studies that investigate iterative update strategies for NMF algorithms using nature-inspired metaheuristics.

Notation: A matrix is represented by an uppercase italic letter (A, B, Σ, ...), a vector by a lowercase bold letter (u, x, q_1, ...), and a scalar by a lowercase Greek letter (λ, µ, ...). The i-th row vector of a matrix D is represented as $d^r_i$, and the j-th column vector of D as $d^c_j$.

II. REVIEW OF METHODS

Non-negative Matrix Factorization (NMF). Low-rank approximations replace a data matrix with a related matrix of much lower rank in order to reduce the required storage space and/or to achieve a more efficient representation of the relationship between data elements. Although there are several well-known techniques (e.g., the SVD), a main problem of most techniques is the interpretability of the transformed features after the approximation. The resulting orthogonal matrix factors generated by the approximation are often difficult to interpret because they contain positive and negative coefficients. A negative quantification is often meaningless in the application domain, and the information about how much an original attribute contributes is lost. The NMF leads to special low-rank approximations which satisfy non-negativity constraints. These constraints require that all entries in A, W and H are zero or positive. This makes the interpretation of the NMF factors much easier than for factors containing positive and negative entries, and enables NMF to form a whole by a non-subtractive combination of parts [13].

Optimization Problem. The nonlinear optimization problem underlying NMF can generally be stated as

$$\min_{W,H} f(W,H) = \min_{W,H} \frac{1}{2}\,\|A - WH\|_F^2. \qquad (1)$$

The Frobenius norm $\|\cdot\|_F$ is commonly used to measure the error between the original data A and the approximation WH (other measures, e.g., based on the Kullback-Leibler divergence, are also possible [14]). Unlike the SVD, the NMF is not unique, and convergence is not guaranteed for all NMF algorithms. If they converge, then usually to local minima only (potentially different ones for different algorithms). Nevertheless, the data compression achieved with only local minima has been shown to be of desirable quality for many data mining applications [12]. Algorithms for computing NMF are iterative and require initialization of W and H. A proper initialization can lead to faster error reduction and better overall error at convergence. Although the benefits of good initialization techniques are well known in the literature, most studies use random initialization [3]. A possible problem of most initialization techniques is the fact that the initialization procedure can be rather costly in terms of runtime [8].

The Multiplicative Update Algorithm. The general structure of the MU algorithm is given in Alg. 1. Usually, W and H are initialized randomly and the whole algorithm is repeated several times (maxrepetition).
In each repetition, the update steps of the MU algorithm are processed until a maximum number of iterations is reached (maxiter). If the approximation error drops below a pre-defined threshold, or if the shift between two iterations is very small, the algorithm might stop before all iterations are processed. The update steps taken from [14] are based on the mean squared error objective function. The parameter ε in each iteration is a small value (usually $\varepsilon = 10^{-9}$) used to avoid division by zero. The divisions in Algorithm 1 are to be performed element-wise. Comments about the convergence of the MU algorithm can be found, for example, in [2], [15].

    Given matrix $A \in \mathbb{R}^{m \times n}$ and $k \ll \min\{m, n\}$;
    for rep = 1 to maxrepetition do
        W = rand(m, k); H = rand(k, n);
        for iter = 1 to maxiter do
            % perform MU specific update steps
            W = W .* (A H') ./ (W H H' + ε);
            H = H .* (W' A) ./ (W' W H + ε);
            check termination criterion;

Algorithm 1: General Structure of the MU Algorithm

Nature-inspired Metaheuristics. We use five nature-inspired optimization heuristics for iteratively improving the approximation quality of the MU algorithm. Details about the heuristics can be found in the references provided below. Genetic Algorithms (GA, [6]) are adaptive heuristic search algorithms based on evolutionary processes such as natural selection, mutation, and crossover. Particle Swarm Optimization (PSO, [10]) is inspired by the social behavior of swarms. Particles adjust their position in the search space based on their own best historical position as well as on the global best position found so far. Differential Evolution (DE, [18]) is a simple yet effective population-based function minimizer, which moves particles around in the search space using simple mathematical formulae. Fish School Search (FSS, [1]) mimics the movement of fish schools. The main operators are feeding (fish can gain/lose weight, depending on the region they swim in) and swimming. Fireworks Algorithm (FWA, [22]) is a recently developed metaheuristic that simulates the explosion process of fireworks.
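To make Alg. 1 concrete, the following is a minimal runnable Matlab sketch of a single repetition; the relative-error threshold tol stands in for the otherwise unspecified termination criterion and is our own illustrative choice:

    % Minimal multiplicative-update NMF (cf. Alg. 1), one repetition only.
    % A: non-negative m-by-n matrix, k: target rank,
    % tol: illustrative stopping threshold on the relative error.
    function [W, H] = mu_nmf(A, k, maxiter, tol)
        [m, n] = size(A);
        W = rand(m, k); H = rand(k, n);    % random initialization
        eps0 = 1e-9;                       % avoids division by zero
        for iter = 1:maxiter
            W = W .* (A * H') ./ (W * (H * H') + eps0);   % update W
            H = H .* (W' * A) ./ ((W' * W) * H + eps0);   % update H
            if norm(A - W * H, 'fro') / norm(A, 'fro') < tol
                break;                     % termination criterion reached
            end
        end
    end

Matlab's built-in nnmf() from the Statistics Toolbox, on which the authors base their MU implementation (Section IV), exposes the same multiplicative update rule via [W, H] = nnmf(A, k, 'algorithm', 'mult').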

III. METAHEURISTICS FOR OPTIMIZING NMF FACTORS

We use the Frobenius norm (1) as NMF objective function for the MU algorithm to measure the error between A and WH, because it offers some properties which can be utilized to apply metaheuristics for optimizing NMF. The Frobenius norm of a matrix D is defined as $\|D\|_F = (\sum_{i,j} d_{ij}^2)^{1/2}$, where $d_{ij}$ is the element in the i-th row and j-th column of D. Since a reduction of the norm of any row or any column of D leads to a reduction of the total Frobenius norm $\|D\|_F$, the Frobenius norm of D can also be computed row-wise, by summing up the squared norms of all row vectors $d^r_i$ of D, or column-wise, by summing up the squared norms of all column vectors $d^c_j$ of D. Hence, we are interested in reducing the norm of selected rows and/or columns of D, the distance matrix between the original data matrix A and the approximation WH, such that D = A − WH. In order to improve the approximation as fast as possible we identify the rows of D with the highest norm (i.e., rows whose approximation is worse than for other rows of D) and optimize the corresponding rows of W. The same procedure is used to identify the columns of H that should be optimized.

    for iter = 1 to maxiter do
        W = W .* (A H') ./ (W H H' + ε);
        H = H .* (W' A) ./ (W' W H + ε);
        if (iter < m) then
            % Update rows of W
            d_i^r is the i-th row vector of D = A − WH;
            [Val, IX_W] = sort(norm(d_i^r), 'desc');
            IX_W = IX_W(1 : c);
            ∀ i ∈ IX_W: minimize $\|a^r_i - w^r_i H\|_F$;
            W(i, :) = w_i^r;
            % Update columns of H
            d_j^c is the j-th column vector of D = A − WH;
            [Val, IX_H] = sort(norm(d_j^c), 'desc');
            IX_H = IX_H(1 : c);
            ∀ j ∈ IX_H: minimize $\|a^c_j - W h^c_j\|_F$;
            H(:, j) = h_j^c;
            c = c − Δc;
        check termination criterion;

Algorithm 3: Iterative Optimization of the MU Algorithm

The iterative optimization procedure for the MU algorithm using metaheuristics is summarized in Alg. 3 (which extends the inner loop of Alg. 1). Variables used in Alg. 3:

m: the number of iterations in which the optimization using metaheuristics is applied. For example, if m = 5, the optimization is applied in the first 5 iterations. If iter >= m, only the standard MU update steps are executed.

c: the number of rows and/or columns that are optimized in the current iteration. For example, if c = 10, the 10 row vectors of D having the highest norm are identified and the 10 corresponding rows of W are optimized.

Δc: the value of c is decreased by Δc in each iteration, with Δc = round(c_initial / m).

Functions used in Algorithm 3:

[Val, IX_W] = sort(norm(d_i^r), 'desc'): returns the values (Val) and the corresponding indices (IX_W) of the norms of all row vectors d_i^r of D in descending order.

IX_W = IX_W(1 : c): returns only the first c elements of the vector IX_W.

minimize $\|a^r_i - w^r_i H\|_F$: in this line $a^r_i$ (the i-th row of A) and H are used as input parameters for the optimization algorithms; the output is the optimized row vector $w^r_i$.

W(i, :) = $w^r_i$: the i-th row of W is replaced with $w^r_i$.

Updating the columns of H is similar to updating the rows of W. The dimension of the optimization problem is identical to the rank k of the NMF and is given by the first dimension of H and the second dimension of W, respectively.

Parallelism. As the metaheuristics have relatively high computational cost compared to the simple MU update steps, it is necessary to parallelize the optimization tasks. Since all rows of W are independent from each other, the optimization of any row of W does not influence the optimization of any other row of W (the same holds for the columns of H). This allows for a parallel implementation of the proposed method.
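The following is a sketch of the row-update step of Alg. 3 in Matlab, assuming a generic optimizer handle optimizeRow (a hypothetical name; it stands in for any of the five metaheuristics and minimizes the given fitness function over $\mathbb{R}^k$):

    % One metaheuristic pass over the c worst-approximated rows of W (cf. Alg. 3).
    D = A - W * H;                          % residual (distance) matrix
    rowNorms = sqrt(sum(D.^2, 2));          % norm of each row vector of D
    [~, IXW] = sort(rowNorms, 'descend');   % rows with highest norm first
    IXW = IXW(1:c);                         % keep only the c worst rows
    Wopt = zeros(c, size(W, 2));            % buffer for the optimized rows
    parfor t = 1:c                          % rows are independent -> parallel
        i = IXW(t);
        fitness = @(w) norm(A(i, :) - w * H, 'fro');   % objective for row i
        Wopt(t, :) = optimizeRow(fitness, size(W, 2)); % search in R^k
    end
    W(IXW, :) = Wopt;                       % write the optimized rows back

The column update for H is symmetric, with fitness @(h) norm(A(:, j) - W * h, 'fro') over column vectors $h \in \mathbb{R}^k$.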
IV. EXPERIMENTAL EVALUATION

The software used for the experiments in this paper was written in Matlab. The MU implementation is based on the nnmf() function included in Matlab's Statistics Toolbox since version v6.2. For the optimization algorithms, we adapted or implemented the following Matlab codes: for PSO and DE we adapted the codes from [17] to our needs; for GA we adapted the continuous genetic algorithm available in the appendix of [6]. Due to space limitations, and in order to provide unbiased evaluation results, we present results achieved with a randomly created, square matrix.

Parameter Setup. All heuristics use a population size of ten and run for ten iterations (100 fitness evaluations). These parameters were chosen to be significantly smaller than for most other optimization problems due to runtime limitations: an overly expensive optimization may outweigh the performance gain of the proposed method. The dimension of the optimization problem is identical to the rank k of the NMF. The upper/lower bounds of the search space were set to the interval [0, 4], and the upper/lower bounds of the initialization to [0, max(A)]. Algorithm-specific parameters:

GA: mutation rate of 0.5; selection rate of 0.65.
PSO: following [4], $\varphi = 4.1$ and $c_1 = c_2 = 2.05$.
DE: crossover probability (pc) set to the upper limit 1.
FSS: step_ind_initial = 1, step_ind_final = 0.001, W_scale = 10.
FWA: number of sons (sonnum) set to
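As a concrete instance of the per-row search, the following is a minimal constriction-coefficient PSO in the spirit of [4], wired to the parameter setup above (population 10, 10 iterations, dimension k, search bounds [0, 4]); this is an illustrative sketch only, not the code the authors adapted from [17]:

    % Minimal constriction PSO (cf. [4]); a sketch, not the authors' adapted code.
    % fitness: handle over row vectors in R^k; lb, ub: scalar search-space bounds.
    function best = pso_min(fitness, k, lb, ub)
        pop = 10; iters = 10;                 % settings from the Parameter Setup
        chi = 0.7298; c1 = 2.05; c2 = 2.05;   % constriction factor, phi = 4.1
        X = lb + (ub - lb) * rand(pop, k);    % positions
        V = zeros(pop, k);                    % velocities
        P = X;                                % personal best positions
        Pf = zeros(pop, 1);                   % personal best fitness values
        for i = 1:pop, Pf(i) = fitness(X(i, :)); end
        [~, g] = min(Pf);                     % index of the global best
        for it = 1:iters
            for i = 1:pop
                V(i,:) = chi * (V(i,:) + c1 * rand(1,k) .* (P(i,:) - X(i,:)) ...
                                       + c2 * rand(1,k) .* (P(g,:) - X(i,:)));
                X(i,:) = min(max(X(i,:) + V(i,:), lb), ub);  % clamp to [lb, ub]
                f = fitness(X(i,:));
                if f < Pf(i), P(i,:) = X(i,:); Pf(i) = f; end
            end
            [~, g] = min(Pf);
        end
        best = P(g, :);
    end

With the hypothetical names from the previous sketch, this could serve as the row optimizer, e.g. optimizeRow = @(f, dim) pso_min(f, dim, 0, 4).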

Hardware. Runtimes were measured on a SUN Fire X4600 M2 with eight 3.2 GHz quad-core processors and 32 GB RAM. We implemented all algorithms in Matlab, exploiting Matlab's parallel computing potential. This allows for parallelizing the optimization process over up to 32 workers (each Matlab worker runs on one core). The runtimes presented in this section are based on this parallel implementation.

Fig. 1. Accuracy per iteration when updating W and H; m=10, c=20, k=5.

Accuracy per Iteration. Fig. 1 shows the accuracy per iteration for the basic MU algorithm and the proposed optimization method based on the different metaheuristics when optimizing both factors W and H. Results in this figure were achieved with m set to 10 and c set to 20 (Section III). The results show that the approximation error per iteration can be reduced significantly if metaheuristics are used. However, Fig. 1 does not take into account the overall runtime needed to achieve a given accuracy. In fact, the computational expenses for optimizing rows of W and columns of H using the settings mentioned above are rather large and overwhelm the benefits of the faster error reduction per iteration.

Fig. 2. Accuracy per iteration when updating only W; m=2, c=20, k=5.

Fig. 2 shows similar information as Fig. 1 for a different parameter setting. Due to the relatively high computational cost of the metaheuristics, we applied our optimization procedure here only to the rows of W, while the columns of H remained unchanged. Experiments showed that with this setting the loss in accuracy compared to optimizing both W and H is relatively small, while the runtime can be reduced significantly. m was set to 2, which indicates that the optimization is only applied in the first two iterations (compared to m = 10 in Fig. 1). All optimization algorithms are still able to achieve a much faster reduction of the approximation error in terms of accuracy per iteration, although the values of the parameters m and c were chosen to be very small.

Fig. 3. Proportional runtimes for achieving the same accuracy as basic MU after 15 (30) iterations. Only rows of W are updated; m=2, c=20, k=2.

Accuracy per Runtime. Fig. 3 shows the reduction in runtime when the same accuracy as for basic MU after 15 (30) iterations should be achieved (for k = 2). The proposed iterative optimization strategy only needs about 60% of the runtime to achieve the same accuracy as basic MU after 15 iterations, and between 40% and 50% of the runtime to achieve the same accuracy as basic MU after 30 iterations (depending on the applied metaheuristic). In Fig. 3, the runtime of basic MU sets the baseline (1 = 100%); the runtimes of the metaheuristics are given as the ratio $t_{opt(XX)} / t_{basicMU}$, where XX stands for the respective metaheuristic.

Table I compares the number of iterations needed to achieve a given accuracy as well as the necessary runtime to achieve a given accuracy for different values of rank k, with the same parameters as in Fig. 2 and Fig. 3. It can be interpreted as follows: the left-most column is the abbreviation of the optimization algorithm. The first row for any optimization algorithm shows the number of iterations needed to achieve the same accuracy as the basic version of MU for a given rank k when using the proposed iterative optimization update strategy. E.g.,
for rank k = 2, the iterative optimization of MU using DE as optimization algorithm needs only 4 iterations to achieve the same accuracy as the basic MU algorithm after 15 iterations (7 iterations for k = 3, 8 iterations for k = 5, and so on). The second row for any optimization algorithm shows the overall runtime needed to achieve the same accuracy as the basic version of MU for a given rank k. For example, for rank k = 2, the iterative optimization of MU using DE as optimization algorithm needs only 0.58 of the runtime of the basic version of MU (cf. Fig. 3; 0.69 for k = 3, 0.80 for k = 5, and so on). The best result for a given rank k is highlighted in bold. The third and fourth rows for any optimization algorithm show the same information for 30 iterations.
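A sketch of the measurement behind these iteration and runtime comparisons, reusing mu_nmf from Section II and assuming a hypothetical variant mu_nmf_opt with the same interface that implements Alg. 3 (the test-matrix size is our own assumption, not taken from the paper):

    % Runtime-per-accuracy comparison in the style of Fig. 3 and Table I.
    A = rand(500, 500); k = 2;                          % random square test matrix
    tic; [W, H] = mu_nmf(A, k, 15, 0); tBasic = toc;    % 15 basic MU iterations
    target = norm(A - W*H, 'fro') / norm(A, 'fro');     % accuracy reached by basic MU
    tic; [W2, H2] = mu_nmf_opt(A, k, 100, target); tOpt = toc;  % run until target met
    fprintf('runtime ratio t_opt/t_basic = %.2f\n', tOpt / tBasic);  % < 1: faster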

TABLE I. COMPARISON OF ITERATIONS PER ACCURACY AND RUNTIME PER ACCURACY FOR DIFFERENT VALUES OF RANK k (m=2, c=20). For each algorithm (DE, FSS, FWA, GA, PSO) and each rank k ∈ {2, 3, 5, 7, 10}, the four rows give: iterations to reach acc(15 iters), the runtime ratio t(15), iterations to reach acc(30 iters), and the runtime ratio t(30). [Numerical entries not preserved in this transcription.]

Summarizing, the proposed optimization strategy is able to clearly decrease the overall runtime per accuracy compared to the basic version of the MU algorithm. With increasing k the reduction of runtime tends to decrease. For rank k > 15 our method is slower than the basic MU algorithm. This is caused by the higher computational cost of the optimization algorithms for larger problem dimensions. When comparing the optimization algorithms, no big gap between them can be found. Generally, DE achieved slightly better results than the other metaheuristics; for small ranks (k = 2 or 3), FSS and PSO achieve the best results.

V. CONCLUSION

In this paper we presented a new iterative update strategy for the MU algorithm based on nature-inspired optimization algorithms. The proposed update strategy allows for efficiently computing the optimization of single rows of W and/or single columns of H in parallel. Our results show that, especially for small rank k, our method achieves the same approximation error as the standard version of MU in significantly fewer iterations and in faster overall runtime. This indicates the applicability of our method as a rough and very fast approximation method. We are currently extending our study to other NMF algorithms in order to further utilize the potential of the proposed update strategy.

Acknowledgments. This work was supported by the National Natural Science Foundation of China (NSFC), Grant No. Andreas wants to thank the Erasmus Mundus External Coop. Window, Lot 14 ( / ECW).

REFERENCES

[1] C. Bastos Filho, F. Lima Neto, M. Sousa, M. Pontes, and S. Madeiro, On the influence of the swimming operators in the fish school search algorithm, in SMC 2009: Proceedings of Systems, Man and Cybernetics, 2009.
[2] M. W. Berry, M. Browne, A. N. Langville, P. V. Pauca, and R. J. Plemmons, Algorithms and applications for approximate nonnegative matrix factorization, Computational Statistics & Data Analysis, 52 (2007).
[3] C. Boutsidis and E. Gallopoulos, SVD based initialization: A head start for nonnegative matrix factorization, Pattern Recognition, 41 (2008).
[4] D. Bratton and J. Kennedy, Defining a standard for particle swarm optimization, in Swarm Intelligence Symposium (SIS), IEEE, 2007.
[5] R. Chiong, ed., Nature-Inspired Algorithms for Optimisation, vol. 193 of Studies in Computational Intelligence, Springer.
[6] R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms (2nd ed.), John Wiley & Sons, Inc.
[7] A. Janecek, S. Schulze-Grotthoff, and W. N. Gansterer, libnmf - a library for nonnegative matrix factorization, Computing and Informatics, 22 (2011).
[8] A. G. Janecek and W. N. Gansterer, Utilizing nonnegative matrix factorization for classification problems, in Survey of Text Mining III: Application and Theory, M. W. Berry and J. Kogan, eds., John Wiley & Sons, Inc.
[9] A. G. Janecek and Y. Tan, Using population based algorithms for initializing nonnegative matrix factorization, in ICSI 2011: Second International Conference on Swarm Intelligence (to appear).
[10] J. Kennedy and R. Eberhart, Particle swarm optimization, in Proceedings of IEEE International Conference on Neural Networks, vol. 4, 1995.
[11] H. Kim and H. Park, Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method, SIAM J. Matrix Anal. Appl., 30 (2008).
[12] A. N. Langville, C. D. Meyer, and R. Albright, Initializations for the nonnegative matrix factorization, in SIGKDD '06: 12th ACM Int. Conf. on Knowledge Discovery and Data Mining.
[13] D. D. Lee and H. S. Seung, Learning parts of objects by non-negative matrix factorization, Nature, 401 (1999).
[14] D. D. Lee and H. S. Seung, Algorithms for non-negative matrix factorization, Advances in Neural Information Processing Systems, 13 (2001).
[15] C.-J. Lin, Projected gradient methods for nonnegative matrix factorization, Neural Computation, 19 (2007).
[16] P. Paatero and U. Tapper, Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values, Environmetrics, 5 (1994).
[17] M. E. H. Pedersen, SwarmOps. Online: projects/swarmops/matlab, 04/2010.
[18] K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution - A Practical Approach to Global Optimization, Springer.
[19] M. N. Schmidt and H. Laurberg, Non-negative matrix factorization with Gaussian process priors, Computational Intelligence and Neuroscience, 2008 (2008).
[20] V. Snásel, J. Platos, and P. Krömer, Developing genetic algorithms for boolean matrix factorization, in DATESO 2008.
[21] K. Stadlthanner, D. Lutter, F. Theis, et al., Sparse nonnegative matrix factorization with genetic algorithms for microarray analysis, in IJCNN 2007: Proceedings of the International Joint Conference on Neural Networks, 2007.
[22] Y. Tan and Y. Zhu, Fireworks algorithm for optimization, in Advances in Swarm Intelligence, Y. Tan, Y. Shi, and K. Tan, eds., Lecture Notes in Computer Science, Springer, 2010.
