A Population-Based Learning Algorithm Which Learns Both Architectures and Weights of Neural Networks

Yong Liu and Xin Yao
Computational Intelligence Group, Department of Computer Science, University College, The University of New South Wales, Australian Defence Force Academy, Canberra, ACT, Australia. xin@csadfa.cs.adfa.oz.au

Abstract

One of the major issues in the field of artificial neural networks (ANNs) is the design of their architectures. There is strong biological and engineering evidence that the information processing capability of an ANN is determined by its architecture. This paper proposes a new population-based learning algorithm (PBLA) which learns both an ANN's architecture and weights. An evolutionary approach is used to evolve a population of ANNs. Unlike other evolutionary approaches to ANN learning, each ANN (i.e., individual) in the population is evaluated by partial training rather than complete training. Substantial savings in computational cost can be achieved by such progressive partial training. This training process can change both an ANN's architecture and weights. Our preliminary experiments have demonstrated the effectiveness of the algorithm.

1 Introduction

One of the major issues in the field of ANNs is the design of their architectures. There is strong biological and engineering evidence that the information processing capability of an ANN is determined by its architecture. Given a learning task, a network that is too small will not be capable of forming a good model of the problem. On the other hand, a network that is too big may overfit the training data and generalise very poorly. With little or no prior knowledge of the problem, one usually determines the architecture by trial and error. There is no systematic way to design a near-optimal architecture automatically for a given task.
Research on constructive and destructive algorithms is an effort made towards the automatic design of architectures. Roughly speaking, a constructive algorithm starts with the smallest possible network and gradually increases its size until performance begins to level off, while a destructive algorithm does the opposite: it starts with a maximal network and deletes unnecessary layers, nodes and connections during training. The design of an optimal architecture for an ANN can be formulated as a search problem in the architecture space, where each point represents an architecture.

(This work is supported by the Australian Research Council through its small grant scheme and by a University College Special Research Grant. Published in Proc. of ICYCS'95 Workshop on Soft Computing, ed. X. Yao and X. Li, pp. 29-38, July. To appear in Chinese Journal of Advanced Software Research, Allerton Press, Inc., New York, NY 10011, Vol. 3, No. 1.)

Given some performance criteria
about architectures, the performance level of all architectures forms a surface in the space. Optimal architecture design is then equivalent to finding the highest point on this surface. Because the surface is infinitely large, nondifferentiable, complex, deceptive and multimodal, evolutionary algorithms are a better candidate for searching it than the constructive and destructive algorithms mentioned above. Because of these advantages of the evolutionary design of architectures, a lot of research has been carried out in recent years [1, 2]. In the evolution of architectures, each architecture is evaluated through back-propagation (BP) training. This process is often very time-consuming and sensitive to the initial conditions of the training. Such evaluation of the architecture is also extremely noisy [2]. Furthermore, if the measure of fitness is the sum of squared errors on the training set, this method may generate networks that overlearn the training data. One way to avoid this is to add a complexity term, e.g. the number of connections in the architecture, to the fitness function. However, this penalty-term method tends to generate ANNs that are not able to learn. In the extreme case, a network might try to gain rewards by pruning off all of its connections. In order to solve the above problems, this paper proposes a new population-based learning algorithm which learns both an ANN's architecture and weights. The evolutionary approach is used to evolve a population of ANNs. Each individual of the population is evaluated by partial training using the modified BP. Because the network architectures in the population differ from each other, it is not suitable to keep the learning rate fixed for all individuals. In PBLA, we modify the classical BP by dynamically adapting the learning rate during training for each member in the population.
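The complexity-penalty fitness discussed above can be sketched in a few lines (a toy illustration; the function name, the weighting parameter `lam` and the sample numbers are our own):

```python
def penalised_fitness(sse, num_connections, lam):
    """Fitness = sum of squared errors plus a complexity penalty
    proportional to the number of connections.  Lower is better."""
    return sse + lam * num_connections

# The degenerate case noted above: with a heavy enough penalty, a network
# that has pruned off all of its connections can score "better" than a
# trained one, even though it has learned nothing.
trained = penalised_fitness(sse=0.5, num_connections=30, lam=0.1)  # 3.5
empty = penalised_fitness(sse=2.0, num_connections=0, lam=0.1)     # 2.0
```

This is exactly the failure mode that motivates PBLA's alternative of evaluating architectures by partial training instead of a penalised fitness.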
When a parent network is selected for breeding from the population, PBLA first tests its performance to determine whether to continue training or to mutate the architecture. If the parent network is promising, PBLA continues training using the modified BP. Otherwise, PBLA switches from the modified BP to simulated annealing (SA). If SA still cannot make the network escape from the local minimum, PBLA mutates the architecture of the network to generate a new architecture. To speed up network optimisation, we apply the nonconvergent method [3] to guide mutation. In Section 2, we describe PBLA at the population level and the individual level. Section 3 reports experimental results with PBLA on a number of parity problems. Finally, some conclusions are given in Section 4.

2 A Population-Based Learning Algorithm

2.1 Network Architecture

In the published literature on ANNs, a large number of structures have been considered and studied. These can be categorised into two broad classes: feedforward neural networks and recurrent networks. Here, a class of feedforward neural networks called generalised multilayer perceptrons is considered. The architecture of such a network is shown in Figure 1, where X and Y are the inputs and outputs respectively. We assume the following:

    net_i = x_i = X_i,                       1 <= i <= m       (1)
    net_i = \sum_{j=1}^{i-1} w_{ij} x_j,     m < i <= N + n    (2)
    x_j = f(net_j),                          m < j <= N + n    (3)
    Y_i = x_{N+i},                           1 <= i <= n       (4)
where f is the following sigmoid function:

    f(z) = 1 / (1 + e^{-z})                                    (5)

m and n are the numbers of inputs and outputs respectively, and N is a constant that can be any integer you choose as long as it is no less than m. The value of N + n determines how many nodes are in the network (if we include the inputs as nodes).

[Figure 1: A generalised multilayer perceptron. Nodes 1 to m copy the inputs X; nodes N+1 to N+n emit the outputs Y.]

In Figure 1, there are N + n circles, representing all of the nodes in the network, including the input nodes. The first m circles are really just copies of the inputs X_1, ..., X_m. Every other node in the network, such as node number i, which calculates net_i and x_i, takes inputs from every node that precedes it in the network. Even the last output node, which generates Y_n, takes input from other output nodes, such as the one which outputs Y_{n-1}. In neural network terminology, this network is "fully connected" in the extreme. It is generally agreed that it is inadvisable for a generalised multilayer perceptron to be fully connected. In this context, we may therefore raise the following question: given that a generalised multilayer perceptron should not be fully connected, how should the connections of the network be allocated? This question is of no major concern in the case of small-scale applications, but it is certainly crucial to the successful application of BP for solving large-scale, real-world problems. However, there is no systematic way to design a near-optimal architecture automatically for a given task. Our present approach is to learn both the architectures and weights of neural networks based on evolutionary algorithms. In PBLA, we choose the architecture and the weights w_{ij} so as to minimise the squared error over a training set that contains T patterns:

    E = \sum_{t=1}^{T} E(t) = (1/2) \sum_{t=1}^{T} \sum_{i=1}^{n} [Y_i(t) - Z_i(t)]^2    (6)

where Y_i(t) and Z_i(t) are the actual and desired outputs of node i for pattern t.
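Equations (1)-(5) can be read off almost directly as code. The following NumPy sketch uses our own layout: a single (N+n) x (N+n) matrix with w[i, j] the weight on the connection from node j to node i, zero where no connection exists.

```python
import numpy as np

def forward(w, x_input, m, N, n):
    """Forward pass of a generalised multilayer perceptron.

    Nodes 0..m-1 are the inputs; the last n of the N+n nodes are
    the outputs.  Node i may receive input from every node j < i.
    """
    x = np.zeros(N + n)
    x[:m] = x_input                          # Eq. (1): input nodes copy the inputs
    for i in range(m, N + n):
        net_i = w[i, :i] @ x[:i]             # Eq. (2): sum over all preceding nodes
        x[i] = 1.0 / (1.0 + np.exp(-net_i))  # Eqs. (3) and (5): sigmoid activation
    return x[N:]                             # Eq. (4): outputs are the last n nodes
```

A fully connected instance of this network corresponds to a dense strictly lower-triangular weight matrix; the architectures PBLA evolves are sparsity patterns in w.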
2.2 The Evolutionary Process

PBLA uses an evolutionary-programming-like algorithm to evolve a population of ANNs. The method works as follows:
Step 1. Randomly generate an initial population of M feedforward neural networks. The number of hidden nodes and the initial connection density of each network in the population are chosen within certain ranges. The random initial weights are uniformly distributed inside a small range.

Step 2. Partially train each network in the population for a certain number of epochs using the modified BP. The number of epochs is fixed by a control parameter set by the user. The error E of each network is checked after partial training. If E has not been significantly reduced, the assumption is that the network is trapped in a local minimum, and the network is marked 'failure'. Otherwise it is marked 'success'.

Step 3. Rank the networks in the population according to their error values (from the best to the worst).

Step 4. Use rank-based selection to pick one parent network from the population. If its mark is 'success', go to Step 5. Otherwise go to Step 6.

Step 5. Partially train the parent network to obtain an offspring network and mark it in the same way as in Step 2. Insert this offspring into the ranking, replacing its parent network. Go back to Step 4.

Step 6. Train the parent network with SA to obtain an offspring network. If SA reduces the error E of the parent network significantly, mark the offspring network 'success' and insert it into the ranking, replacing its parent network; then go back to Step 4. Otherwise discard this offspring and go to Step 7.

Step 7. Delete hidden nodes.
1. Randomly delete hidden nodes from the parent network.
2. Partially train the pruned network to obtain an offspring network. If the offspring network is better than the worst network in the population, insert the former into the ranking and remove the latter, then go back to Step 4. Otherwise discard this offspring and go to Step 8.

Step 8. Delete connections.
1. Calculate the approximate importance of each connection in the parent network using the nonconvergent method.
Randomly delete connections from the parent network according to the calculated importance.
2. Partially train the pruned network to obtain an offspring network and decide whether to accept or reject it in the same way as in Step 7. If the offspring network is accepted, go back to Step 4. Otherwise discard this offspring and go to Step 9.

Step 9. Add connections/nodes.
1. Calculate the approximate importance of each virtual connection with zero weight. Randomly add connections to the parent network, according to the calculated importance, to obtain Offspring 1.
2. Add new nodes to the parent network, through splitting existing nodes, to obtain Offspring 2.
3. Partially train Offspring 1 and Offspring 2, then choose the better one as the surviving offspring. Insert the surviving offspring into the ranking and remove the worst network from the population.

Step 10. Repeat Steps 4 to 9 until an acceptable network has been found or a certain number of generations has been reached.

Evolutionary algorithms provide alternative approaches to the design of architectures. Such evolutionary approaches consist of two major stages. The first stage is to decide the genotype representation scheme of architectures. The second stage is the evolution itself, driven by evolutionary search procedures in which genetic operators have to be decided in conjunction with the representation scheme. The key issue is to decide how much information about an architecture should be encoded into the genotype representation. At one extreme, all the detail, i.e. every connection and node of an architecture, can be specified by the genotype representation. This kind of representation scheme is called the direct encoding scheme. At the other extreme, only the most important parameters of an architecture, such as the number of hidden layers and hidden nodes in each layer, are encoded. Other details of the architecture are left to the training process to decide. This kind of representation scheme is called the indirect encoding scheme. In the direct encoding scheme, each connection in an architecture is directly specified by its binary representation. For example, an N x N matrix C = (c_{ij}) can represent an architecture with N nodes, where c_{ij} indicates the presence or absence of the connection from node i to node j. We can use c_{ij} = 1 to indicate a connection and c_{ij} = 0 to indicate no connection. In fact, c_{ij} can even be the connection weight from node i to node j, so that both the topological structure and the connection weights of an ANN are evolved at the same time. Each such matrix has a direct one-to-one mapping to the corresponding architecture.
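To make the direct encoding scheme concrete, here is a minimal NumPy sketch (the helper names are our own): an architecture over a fixed set of nodes is just a binary matrix, and keeping nonzero entries only in the strict upper triangle enforces the feedforward property (connections run only from earlier nodes to later ones).

```python
import numpy as np

def random_architecture(num_nodes, density, seed=None):
    """Directly encoded architecture: c[i, j] = 1 iff there is a
    connection from node i to node j.  Entries are kept with the given
    density, then restricted to the strict upper triangle so that the
    resulting network is feedforward."""
    rng = np.random.default_rng(seed)
    c = (rng.random((num_nodes, num_nodes)) < density).astype(int)
    return np.triu(c, k=1)

def is_feedforward(c):
    """True iff the matrix has no self-loops or backward connections."""
    return np.array_equal(c, np.triu(c, k=1))
```

For sparse networks, the same genotype is more compactly stored as per-node adjacency lists, which is the linked-list implementation the text describes.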
Constraints on the architectures being explored can easily be incorporated into such a representation scheme by setting constraints on the matrix; e.g. a feedforward ANN will have nonzero entries only in the upper triangle of the matrix. Because the direct encoding scheme is relatively simple and straightforward to implement, we decided to use it to code our network architectures. However, the direct encoding scheme does not scale well, since large architectures require very large matrices to represent. To implement this representation scheme efficiently on a conventional computer, one would use a linked list to represent the connections actually implemented for each node. It is obvious that for sparse feedforward neural networks a lot of memory can be saved this way. In PBLA, each network in the population is evaluated by partial training. The fitness is calculated as the sum of the squared errors over the training set. Since the evaluation of the networks is very expensive, PBLA adopts a rank-based selection mechanism to enhance selection pressure. It has been demonstrated that selection pressure is a key factor in obtaining a near optimum. In PBLA, the networks in the population are first sorted in non-descending order according to their fitness. Let the M sorted networks be numbered 0, 1, ..., M-1. Then the (M-j)th network is selected with probability

    p(M - j) = j / \sum_{k=1}^{M} k                            (7)

The selected network is then manipulated by the following five mutations: partial training, deleting nodes, deleting connections, adding connections, and adding nodes. In order to avoid unnecessary training and premature convergence, we adopt the following replacement strategy. If the offspring is obtained through progressive partial training, the algorithm accepts it and removes its parent network. If the offspring is obtained through deleting connections or nodes, the algorithm accepts
it only when it is better than the worst network in the population; in that case, the algorithm removes the worst network. If the offspring is obtained through adding connections or nodes, the algorithm always accepts it and removes the worst network in the population.

2.3 Partial Training and Evaluation

BP is currently the most popular algorithm for the supervised training of ANNs, and there have been successful applications of BP in various areas. However, it is well known that finding optimal weights using the classical BP is very slow. Numerous heuristic optimisation algorithms have been proposed to improve the convergence speed of the classical BP. Although most of these have been somewhat successful, they usually introduce additional parameters which must be varied from one problem to another and which, if not chosen properly, can actually slow the rate of convergence. In the classical BP, the learning rate is kept fixed throughout training. The learning rate is often very small in order to prevent oscillations and ensure convergence; however, a very small fixed learning rate slows down the convergence of BP. The use of a fixed learning rate may not suit the evolutionary design of architectures: because all individuals in the population differ from each other, a learning rate appropriate for one network is not necessarily appropriate for the other networks in the population. Every network should have its own individual learning rate. Unfortunately, the search for a good fixed learning rate can itself become a hard problem. In PBLA, learning is accelerated through learning rate adaptation. The initial learning rates eta_i (i = 1, ..., M) of all individuals in the initial population have the same value. Each individual adjusts its learning rate within a certain range during the evolutionary process according to a simple heuristic principle. During partial training, the error E is checked after every k epochs. If E decreases, the learning rate is increased.
Otherwise, the learning rate is reduced; in the latter case, the new weights and error are discarded. Another drawback of BP is due to its gradient-descent nature: BP often gets trapped in a local minimum of the error function and is very inefficient at finding a global minimum when the error function is multimodal and nondifferentiable. There are two ways of escaping from a local minimum. One is to mutate the network architecture. The other is to adopt a global optimisation method to train the network. It is worth pointing out that the capability of an ANN depends not only on the network architecture but also on the weights. When a network is trapped in a local minimum, it is not clear whether this is due to the weights or to an inappropriate network architecture. In order to find a smaller network, PBLA first switches from the modified BP to SA in order to find better weights. Only when SA fails to improve the error E does PBLA start to mutate the network architecture.

2.4 Architecture Mutation

An issue in the evolution of architectures is when and how the architectures should be mutated. In PBLA, when the hybrid algorithm that combines the modified BP and SA fails to improve the error E of the parent network, the algorithm starts to mutate its architecture. The mutation is divided into a deletion phase and an addition phase. The architecture is first mutated by deleting hidden nodes or connections. If the new network is better than the worst network in the population, it is accepted. Otherwise, the algorithm adds connections or hidden nodes to the network and chooses the better offspring to survive. The selection of which node to remove or split is uniform over the collection of hidden nodes. The deletion of a node involves the complete removal of the node and all its incident connections. In order to preserve the knowledge achieved by the parent network, hidden
Table 1: The parameter set used in PBLA experiments

    Population size                               20
    Number of epochs for each partial training    100
    Initial number of hidden nodes                2-N
    Number of mutated hidden nodes                1-2
    Initial connection density                    0.75
    Number of mutated connections                 1-3
    Initial learning rate                         0.5
    Range of learning rate                        [value not recovered]
    Number of temperatures in SA                  5
    Number of iterations at each temperature      100

nodes are added to the parent network through splitting existing nodes. The two nodes obtained by splitting an existing node i have the same connections as the existing node. The weights of the new nodes take the following values:

    w^1_{ij} = w^2_{ij} = w_{ij},        j < i        (8)
    w^1_{ki} = (1 + \alpha) w_{ki},      k > i        (9)
    w^2_{ki} = -\alpha w_{ki},           k > i        (10)

where w is the weight vector of the existing node i, w^1 and w^2 are respectively the weight vectors of the new nodes, and \alpha is a mutation factor which may take either a fixed or a random value. The addition or deletion of a connection depends on the importance of the connection in the network. The simplest approach is to delete the smallest weight in the network. This, however, is not always the best approach, since the solution could be quite sensitive to such a weight. The nonconvergent method measures the importance of weights by final weight test variables based on significance tests for deviations from zero in the weight update process [3]. Denote by \Delta w^t_{ij}(w) = -\partial L_t / \partial w_{ij} the local gradient of the linear error function L (L = \sum_{t=1}^{T} \sum_{i=1}^{n} |Y_i(t) - Z_i(t)|) with respect to example t and weight w_{ij}. The significance of the deviation of w_{ij} from zero is defined by the test variable

    test(w_{ij}) = \sum_{t=1}^{T} \xi^t_{ij} / sqrt( \sum_{t=1}^{T} (\xi^t_{ij} - \bar{\xi}_{ij})^2 )    (11)

where \xi^t_{ij} = w_{ij} + \Delta w^t_{ij}(w) and \bar{\xi}_{ij} denotes the average over the set {\xi^t_{ij}, t = 1, ..., T}. A large value of the test variable test(w_{ij}) indicates high importance of the connection with weight w_{ij}.
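The node-splitting rule of Eqs. (8)-(10) is designed so that, at the instant of the split, the rest of the network sees no change: both copies inherit node i's incoming weights and hence emit identical activations, while the outgoing weights (1 + alpha)w_ki and -alpha w_ki sum back to w_ki. A sketch follows; the matrix layout (w[a, b] is the weight from node b to node a) and the placement of the second copy at the end are our own bookkeeping choices.

```python
import numpy as np

def split_node(w, i, alpha):
    """Split node i into two nodes following Eqs. (8)-(10).

    The second copy is appended as a new last row/column; node i itself
    keeps its incoming weights, so both copies compute the same activation
    immediately after the split.
    """
    old = w.shape[0]
    w2 = np.zeros((old + 1, old + 1))
    w2[:old, :old] = w
    new = old
    w2[new, :i] = w[i, :i]                             # Eq. (8): same incoming weights
    w2[i + 1:old, i] = (1 + alpha) * w[i + 1:old, i]   # Eq. (9)
    w2[i + 1:old, new] = -alpha * w[i + 1:old, i]      # Eq. (10)
    return w2
```

Since both copies initially output the same value x_i, each downstream node k receives (1 + alpha)w_ki x_i - alpha w_ki x_i = w_ki x_i, exactly as before the split; subsequent training can then differentiate the two copies.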
The advantage of the nonconvergent method is that it does not require the training process to converge, so we can use it to measure the relevance of connections during the evolutionary process. At the same time, since these test variables can be calculated for weights that have already been set to zero, they can also be used to determine which connections should be added to the network.

3 Experiments

In order to test the efficiency of PBLA, we applied it to the N-bit parity problem with N ranging from 4 to 8. The parity problem is a very difficult problem because the most similar patterns (those which differ by a single bit) require different answers. In the N-bit parity problem, the required output is 1 if the input pattern contains an odd number of 1s and 0 otherwise. All 2^N patterns were used in training. PBLA was run with the parameters shown in Table 1. In solving the N-bit parity problem for N = 4 to 8, the Cascade-Correlation algorithm requires (2, 2-3, 3, 4-5, 5-6) hidden nodes respectively [4]; the Perceptron Cascade algorithm requires (2,
..., 3, 3, 4) hidden nodes respectively [5]; the tower algorithm requires N/2 hidden nodes [6]. The first algorithm uses Gaussian hidden nodes; the last two use linear threshold nodes. All networks constructed by the above algorithms use shortcut connections. Using a single hidden layer, FNNCA can construct neural networks with (3, 4, 5, 5) hidden nodes that solve this problem for N = 4 to 7 [7]. Based on ten runs of PBLA for each value of N, the averages of the best networks obtained are summarised in Table 2, where "number of epochs" indicates the total learning epochs taken by PBLA when the best network was obtained. Figure 2 shows an optimum network obtained by PBLA for the 7-bit parity problem. Remarkably, PBLA can solve the 8-bit parity problem with a network having only 3 hidden nodes (Figure 3). The parameters of the networks of Figures 2-3 are given in Tables 3-4, where "T" indicates the thresholds of the hidden and output nodes. It is clear that PBLA is superior to the existing constructive algorithms in terms of the size of the networks: PBLA not only yields appropriate architectures, it can generate optimal architectures.

[Table 2: Summary of results obtained with PBLA. Columns: Parity-4 to Parity-8; rows: min/max/mean/SD of the number of hidden nodes, number of connections, number of epochs, and error of the best networks. Only the error rows are recoverable:]

    Error of   Min    8.3x10^-6   1.1x10^-2   1.5x10^-3   4.2x10^-4   3.9x10^-4
    networks   Max    1.4x10^-3   5.0x10^-2   6.1x10^-2   3.2x10^-2   2.1x10^-2
               Mean   5.0x10^-4   1.4x10^-2   1.2x10^-2   8.9x10^-3   5.2x10^-3
               SD     3.5x10^-3   1.6x10^-2   1.8x10^-2   9.5x10^-3   7.1x10^-3

[Table 3: Parameters for the network of Figure 2 (thresholds T and connection weights; the values are not recoverable from the source).]
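For reference, the complete N-bit parity training set used in these experiments is easy to enumerate (a minimal sketch; the function name is ours):

```python
from itertools import product

def parity_patterns(n_bits):
    """All 2^N patterns of the N-bit parity problem: the target is 1 when
    the input contains an odd number of 1s, and 0 otherwise."""
    return [(bits, sum(bits) % 2) for bits in product((0, 1), repeat=n_bits)]
```

For N = 4 this yields 16 patterns, and flipping any single bit of an input flips its target, which is precisely why the most similar patterns require different answers.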
[Figure 2: An optimum network for the 7-bit parity problem.]

[Figure 3: An optimum network for the 8-bit parity problem.]
[Table 4: Parameters for the network of Figure 3 (thresholds T and connection weights; the values are not recoverable from the source).]

4 Conclusion

A population-based learning algorithm is proposed to dynamically generate near-optimal feedforward neural networks for the task at hand. Unlike other evolutionary approaches to ANN learning, each ANN in the population is evaluated by partial training rather than complete training. This training process can change both an ANN's architecture and weights. Our preliminary experiments have demonstrated the effectiveness of the algorithm. Global search procedures such as evolutionary algorithms are usually computationally expensive to run. It is nevertheless beneficial to introduce global search into the design of ANNs, especially when there is little prior knowledge available and the performance of the ANNs is required to be high, because trial and error and other heuristic methods are very inefficient in such circumstances. There have already been some experiments which demonstrate the advantages of hybrid global/local search, but the issue of an optimal combination of different search procedures needs further investigation. With the increasing power of parallel computers, the evolutionary design of large ANNs becomes feasible. The evolutionary process offers a new way to discover possible new ANN architectures.

References

[1] J. D. Schaffer, D. Whitley, and L. J. Eshelman. Combinations of genetic algorithms and neural networks: a survey of the state of the art. In D. Whitley and J. D. Schaffer, editors, Proceedings of the International Workshop on Combinations of Genetic Algorithms and Neural Networks (COGANN-92). IEEE Computer Society Press, Los Alamitos, CA.

[2] X. Yao. Evolutionary artificial neural networks. International Journal of Neural Systems, 4(3).

[3] W. Finnoff, F. Hergert, and H. G. Zimmermann. Improving model selection by nonconvergent methods. Neural Networks, 6.

[4] S. E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2. Morgan Kaufmann, San Mateo, CA, 1990.

[5] N. Burgess. A constructive algorithm that converges for real-valued input patterns. International Journal of Neural Systems, 5(1):59-66.

[6] J.-P. Nadal. Study of a growth algorithm for a feedforward network. International Journal of Neural Systems, 1:55-59.

[7] R. Setiono and L. C. K. Hui. Use of a quasi-Newton method in a feedforward neural network construction algorithm. IEEE Transactions on Neural Networks, 6(1).
More informationObject Modeling from Multiple Images Using Genetic Algorithms. Hideo SAITO and Masayuki MORI. Department of Electrical Engineering, Keio University
Object Modeling from Multiple Images Using Genetic Algorithms Hideo SAITO and Masayuki MORI Department of Electrical Engineering, Keio University E-mail: saito@ozawa.elec.keio.ac.jp Abstract This paper
More informationClassifier C-Net. 2D Projected Images of 3D Objects. 2D Projected Images of 3D Objects. Model I. Model II
Advances in Neural Information Processing Systems 7. (99) The MIT Press, Cambridge, MA. pp.949-96 Unsupervised Classication of 3D Objects from D Views Satoshi Suzuki Hiroshi Ando ATR Human Information
More informationMultiple Constraint Satisfaction by Belief Propagation: An Example Using Sudoku
Multiple Constraint Satisfaction by Belief Propagation: An Example Using Sudoku Todd K. Moon and Jacob H. Gunther Utah State University Abstract The popular Sudoku puzzle bears structural resemblance to
More informationGen := 0. Create Initial Random Population. Termination Criterion Satisfied? Yes. Evaluate fitness of each individual in population.
An Experimental Comparison of Genetic Programming and Inductive Logic Programming on Learning Recursive List Functions Lappoon R. Tang Mary Elaine Cali Raymond J. Mooney Department of Computer Sciences
More informationEvolving Multilayer Neural Networks using Permutation free Encoding Technique
Evolving Multilayer Neural Networks using Permutation free Encoding Technique Anupam Das and Saeed Muhammad Abdullah Department of Computer Science and Engineering, Bangladesh University of Engineering
More informationExtensive research has been conducted, aimed at developing
Chapter 4 Supervised Learning: Multilayer Networks II Extensive research has been conducted, aimed at developing improved supervised learning algorithms for feedforward networks. 4.1 Madalines A \Madaline"
More informationCOMBINING NEURAL NETWORKS FOR SKIN DETECTION
COMBINING NEURAL NETWORKS FOR SKIN DETECTION Chelsia Amy Doukim 1, Jamal Ahmad Dargham 1, Ali Chekima 1 and Sigeru Omatu 2 1 School of Engineering and Information Technology, Universiti Malaysia Sabah,
More informationAlgorithm Design (4) Metaheuristics
Algorithm Design (4) Metaheuristics Takashi Chikayama School of Engineering The University of Tokyo Formalization of Constraint Optimization Minimize (or maximize) the objective function f(x 0,, x n )
More information11/14/2010 Intelligent Systems and Soft Computing 1
Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in
More information4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.
1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when
More informationA NEW APPROACH TO SOLVE ECONOMIC LOAD DISPATCH USING PARTICLE SWARM OPTIMIZATION
A NEW APPROACH TO SOLVE ECONOMIC LOAD DISPATCH USING PARTICLE SWARM OPTIMIZATION Manjeet Singh 1, Divesh Thareja 2 1 Department of Electrical and Electronics Engineering, Assistant Professor, HCTM Technical
More informationCHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM
20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:
More informationEvolving SQL Queries for Data Mining
Evolving SQL Queries for Data Mining Majid Salim and Xin Yao School of Computer Science, The University of Birmingham Edgbaston, Birmingham B15 2TT, UK {msc30mms,x.yao}@cs.bham.ac.uk Abstract. This paper
More informationEvolutionary Algorithms. CS Evolutionary Algorithms 1
Evolutionary Algorithms CS 478 - Evolutionary Algorithms 1 Evolutionary Computation/Algorithms Genetic Algorithms l Simulate natural evolution of structures via selection and reproduction, based on performance
More informationResearch on time optimal trajectory planning of 7-DOF manipulator based on genetic algorithm
Acta Technica 61, No. 4A/2016, 189 200 c 2017 Institute of Thermomechanics CAS, v.v.i. Research on time optimal trajectory planning of 7-DOF manipulator based on genetic algorithm Jianrong Bu 1, Junyan
More informationImage Compression: An Artificial Neural Network Approach
Image Compression: An Artificial Neural Network Approach Anjana B 1, Mrs Shreeja R 2 1 Department of Computer Science and Engineering, Calicut University, Kuttippuram 2 Department of Computer Science and
More informationJournal of Global Optimization, 10, 1{40 (1997) A Discrete Lagrangian-Based Global-Search. Method for Solving Satisability Problems *
Journal of Global Optimization, 10, 1{40 (1997) c 1997 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. A Discrete Lagrangian-Based Global-Search Method for Solving Satisability Problems
More informationOptimum Alphabetic Binary Trees T. C. Hu and J. D. Morgenthaler Department of Computer Science and Engineering, School of Engineering, University of C
Optimum Alphabetic Binary Trees T. C. Hu and J. D. Morgenthaler Department of Computer Science and Engineering, School of Engineering, University of California, San Diego CA 92093{0114, USA Abstract. We
More informationResearch on outlier intrusion detection technologybased on data mining
Acta Technica 62 (2017), No. 4A, 635640 c 2017 Institute of Thermomechanics CAS, v.v.i. Research on outlier intrusion detection technologybased on data mining Liang zhu 1, 2 Abstract. With the rapid development
More informationsize, runs an existing induction algorithm on the rst subset to obtain a rst set of rules, and then processes each of the remaining data subsets at a
Multi-Layer Incremental Induction Xindong Wu and William H.W. Lo School of Computer Science and Software Ebgineering Monash University 900 Dandenong Road Melbourne, VIC 3145, Australia Email: xindong@computer.org
More informationAN NOVEL NEURAL NETWORK TRAINING BASED ON HYBRID DE AND BP
AN NOVEL NEURAL NETWORK TRAINING BASED ON HYBRID DE AND BP Xiaohui Yuan ', Yanbin Yuan 2, Cheng Wang ^ / Huazhong University of Science & Technology, 430074 Wuhan, China 2 Wuhan University of Technology,
More informationAn Approach to Polygonal Approximation of Digital CurvesBasedonDiscreteParticleSwarmAlgorithm
Journal of Universal Computer Science, vol. 13, no. 10 (2007), 1449-1461 submitted: 12/6/06, accepted: 24/10/06, appeared: 28/10/07 J.UCS An Approach to Polygonal Approximation of Digital CurvesBasedonDiscreteParticleSwarmAlgorithm
More informationGENERATING FUZZY RULES FROM EXAMPLES USING GENETIC. Francisco HERRERA, Manuel LOZANO, Jose Luis VERDEGAY
GENERATING FUZZY RULES FROM EXAMPLES USING GENETIC ALGORITHMS Francisco HERRERA, Manuel LOZANO, Jose Luis VERDEGAY Dept. of Computer Science and Articial Intelligence University of Granada, 18071 - Granada,
More informationResearch Article Path Planning Using a Hybrid Evolutionary Algorithm Based on Tree Structure Encoding
e Scientific World Journal, Article ID 746260, 8 pages http://dx.doi.org/10.1155/2014/746260 Research Article Path Planning Using a Hybrid Evolutionary Algorithm Based on Tree Structure Encoding Ming-Yi
More informationIMPROVEMENTS TO THE BACKPROPAGATION ALGORITHM
Annals of the University of Petroşani, Economics, 12(4), 2012, 185-192 185 IMPROVEMENTS TO THE BACKPROPAGATION ALGORITHM MIRCEA PETRINI * ABSTACT: This paper presents some simple techniques to improve
More informationThe only known methods for solving this problem optimally are enumerative in nature, with branch-and-bound being the most ecient. However, such algori
Use of K-Near Optimal Solutions to Improve Data Association in Multi-frame Processing Aubrey B. Poore a and in Yan a a Department of Mathematics, Colorado State University, Fort Collins, CO, USA ABSTRACT
More informationUsing Local Trajectory Optimizers To Speed Up Global. Christopher G. Atkeson. Department of Brain and Cognitive Sciences and
Using Local Trajectory Optimizers To Speed Up Global Optimization In Dynamic Programming Christopher G. Atkeson Department of Brain and Cognitive Sciences and the Articial Intelligence Laboratory Massachusetts
More informationAutomatic Generation of Test Case based on GATS Algorithm *
Automatic Generation of Test Case based on GATS Algorithm * Xiajiong Shen and Qian Wang Institute of Data and Knowledge Engineering Henan University Kaifeng, Henan Province 475001, China shenxj@henu.edu.cn
More informationChapter 14 Global Search Algorithms
Chapter 14 Global Search Algorithms An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Introduction We discuss various search methods that attempts to search throughout the entire feasible set.
More informationEcient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines
Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines Zhou B. B., Brent R. P. and Tridgell A. y Computer Sciences Laboratory The Australian National University Canberra,
More informationGenetic Algorithms for Solving. Open Shop Scheduling Problems. Sami Khuri and Sowmya Rao Miryala. San Jose State University.
Genetic Algorithms for Solving Open Shop Scheduling Problems Sami Khuri and Sowmya Rao Miryala Department of Mathematics and Computer Science San Jose State University San Jose, California 95192, USA khuri@cs.sjsu.edu
More informationChapter 5 Components for Evolution of Modular Artificial Neural Networks
Chapter 5 Components for Evolution of Modular Artificial Neural Networks 5.1 Introduction In this chapter, the methods and components used for modular evolution of Artificial Neural Networks (ANNs) are
More informationVariable Neighborhood Search for Solving the Balanced Location Problem
TECHNISCHE UNIVERSITÄT WIEN Institut für Computergraphik und Algorithmen Variable Neighborhood Search for Solving the Balanced Location Problem Jozef Kratica, Markus Leitner, Ivana Ljubić Forschungsbericht
More informationGenetic Algorithms, Numerical Optimization, and Constraints. Zbigniew Michalewicz. Department of Computer Science. University of North Carolina
Genetic Algorithms, Numerical Optimization, and Constraints Zbigniew Michalewicz Department of Computer Science University of North Carolina Charlotte, NC 28223 Abstract During the last two years several
More informationNeural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani
Neural Networks CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Biological and artificial neural networks Feed-forward neural networks Single layer
More informationData Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University
Data Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Search & Optimization Search and Optimization method deals with
More informationl 8 r 3 l 9 r 1 l 3 l 7 l 1 l 6 l 5 l 10 l 2 l 4 r 2
Heuristic Algorithms for the Terminal Assignment Problem Sami Khuri Teresa Chiu Department of Mathematics and Computer Science San Jose State University One Washington Square San Jose, CA 95192-0103 khuri@jupiter.sjsu.edu
More informationGenetic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem
etic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem R. O. Oladele Department of Computer Science University of Ilorin P.M.B. 1515, Ilorin, NIGERIA
More informationArtificial Intelligence
Artificial Intelligence Informed Search and Exploration Chapter 4 (4.3 4.6) Searching: So Far We ve discussed how to build goal-based and utility-based agents that search to solve problems We ve also presented
More informationAn Evolutionary Algorithm for Minimizing Multimodal Functions
An Evolutionary Algorithm for Minimizing Multimodal Functions D.G. Sotiropoulos, V.P. Plagianakos and M.N. Vrahatis University of Patras, Department of Mamatics, Division of Computational Mamatics & Informatics,
More informationA new way to optimize LDPC code in gaussian multiple access channel
Acta Technica 62 (2017), No. 4A, 495504 c 2017 Institute of Thermomechanics CAS, v.v.i. A new way to optimize LDPC code in gaussian multiple access channel Jingxi Zhang 2 Abstract. The code design for
More informationCHAPTER 6 REAL-VALUED GENETIC ALGORITHMS
CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS 6.1 Introduction Gradient-based algorithms have some weaknesses relative to engineering optimization. Specifically, it is difficult to use gradient-based algorithms
More informationGenetic Algorithms Variations and Implementation Issues
Genetic Algorithms Variations and Implementation Issues CS 431 Advanced Topics in AI Classic Genetic Algorithms GAs as proposed by Holland had the following properties: Randomly generated population Binary
More informationA Compensatory Wavelet Neuron Model
A Compensatory Wavelet Neuron Model Sinha, M., Gupta, M. M. and Nikiforuk, P.N Intelligent Systems Research Laboratory College of Engineering, University of Saskatchewan Saskatoon, SK, S7N 5A9, CANADA
More informationA Generalized Permutation Approach to. Department of Economics, University of Bremen, Germany
A Generalized Permutation Approach to Job Shop Scheduling with Genetic Algorithms? Christian Bierwirth Department of Economics, University of Bremen, Germany Abstract. In order to sequence the tasks of
More informationProceedings of the First IEEE Conference on Evolutionary Computation - IEEE World Congress on Computational Intelligence, June
Proceedings of the First IEEE Conference on Evolutionary Computation - IEEE World Congress on Computational Intelligence, June 26-July 2, 1994, Orlando, Florida, pp. 829-833. Dynamic Scheduling of Computer
More informationUsing CODEQ to Train Feed-forward Neural Networks
Using CODEQ to Train Feed-forward Neural Networks Mahamed G. H. Omran 1 and Faisal al-adwani 2 1 Department of Computer Science, Gulf University for Science and Technology, Kuwait, Kuwait omran.m@gust.edu.kw
More information[13] W. Litwin. Linear hashing: a new tool for le and table addressing. In. Proceedings of the 6th International Conference on Very Large Databases,
[12] P. Larson. Linear hashing with partial expansions. In Proceedings of the 6th International Conference on Very Large Databases, pages 224{232, 1980. [13] W. Litwin. Linear hashing: a new tool for le
More informationAn Empirical Study of Software Metrics in Artificial Neural Networks
An Empirical Study of Software Metrics in Artificial Neural Networks WING KAI, LEUNG School of Computing Faculty of Computing, Information and English University of Central England Birmingham B42 2SU UNITED
More informationAccelerating the convergence speed of neural networks learning methods using least squares
Bruges (Belgium), 23-25 April 2003, d-side publi, ISBN 2-930307-03-X, pp 255-260 Accelerating the convergence speed of neural networks learning methods using least squares Oscar Fontenla-Romero 1, Deniz
More informationComparison of Some Evolutionary Algorithms for Approximate Solutions of Optimal Control Problems
Australian Journal of Basic and Applied Sciences, 4(8): 3366-3382, 21 ISSN 1991-8178 Comparison of Some Evolutionary Algorithms for Approximate Solutions of Optimal Control Problems Akbar H. Borzabadi,
More informationLECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS
LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Neural Networks Classifier Introduction INPUT: classification data, i.e. it contains an classification (class) attribute. WE also say that the class
More informationDept. of Computer Science. The eld of time series analysis and forecasting methods has signicantly changed in the last
Model Identication and Parameter Estimation of ARMA Models by Means of Evolutionary Algorithms Susanne Rolf Dept. of Statistics University of Dortmund Germany Joachim Sprave y Dept. of Computer Science
More informationFall 09, Homework 5
5-38 Fall 09, Homework 5 Due: Wednesday, November 8th, beginning of the class You can work in a group of up to two people. This group does not need to be the same group as for the other homeworks. You
More informationReducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm
Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm Dr. Ian D. Wilson School of Technology, University of Glamorgan, Pontypridd CF37 1DL, UK Dr. J. Mark Ware School of Computing,
More informationNeuro-Remodeling via Backpropagation of Utility. ABSTRACT Backpropagation of utility is one of the many methods for neuro-control.
Neuro-Remodeling via Backpropagation of Utility K. Wendy Tang and Girish Pingle 1 Department of Electrical Engineering SUNY at Stony Brook, Stony Brook, NY 11794-2350. ABSTRACT Backpropagation of utility
More informationArtificial neural networks are the paradigm of connectionist systems (connectionism vs. symbolism)
Artificial Neural Networks Analogy to biological neural systems, the most robust learning systems we know. Attempt to: Understand natural biological systems through computational modeling. Model intelligent
More informationFuzzy Signature Neural Networks for Classification: Optimising the Structure
Fuzzy Signature Neural Networks for Classification: Optimising the Structure Tom Gedeon, Xuanying Zhu, Kun He, and Leana Copeland Research School of Computer Science, College of Engineering and Computer
More informationCHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES
CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES 6.1 INTRODUCTION The exploration of applications of ANN for image classification has yielded satisfactory results. But, the scope for improving
More informationModified Order Crossover (OX) Operator
Modified Order Crossover (OX) Operator Ms. Monica Sehrawat 1 N.C. College of Engineering, Israna Panipat, Haryana, INDIA. Mr. Sukhvir Singh 2 N.C. College of Engineering, Israna Panipat, Haryana, INDIA.
More informationDepartment of Electrical Engineering, Keio University Hiyoshi Kouhoku-ku Yokohama 223, Japan
Shape Modeling from Multiple View Images Using GAs Satoshi KIRIHARA and Hideo SAITO Department of Electrical Engineering, Keio University 3-14-1 Hiyoshi Kouhoku-ku Yokohama 223, Japan TEL +81-45-563-1141
More informationOptimization of Noisy Fitness Functions by means of Genetic Algorithms using History of Search with Test of Estimation
Optimization of Noisy Fitness Functions by means of Genetic Algorithms using History of Search with Test of Estimation Yasuhito Sano and Hajime Kita 2 Interdisciplinary Graduate School of Science and Engineering,
More informationNon-deterministic Search techniques. Emma Hart
Non-deterministic Search techniques Emma Hart Why do local search? Many real problems are too hard to solve with exact (deterministic) techniques Modern, non-deterministic techniques offer ways of getting
More informationCOMPLETE INDUCTION OF RECURRENT NEURAL NETWORKS. PETER J. ANGELINE IBM Federal Systems Company, Rt 17C Owego, New York 13827
COMPLETE INDUCTION OF RECURRENT NEURAL NETWORKS PETER J. ANGELINE IBM Federal Systems Company, Rt 7C Owego, New York 3827 GREGORY M. SAUNDERS and JORDAN B. POLLACK Laboratory for Artificial Intelligence
More informationSingle and Multiple Frame Video Trac. Radu Drossu 1. Zoran Obradovic 1; C. Raghavendra 1
Single and Multiple Frame Video Trac Prediction Using Neural Network Models Radu Drossu 1 T.V. Lakshman 2 Zoran Obradovic 1; C. Raghavendra 1 1 School of Electrical Engineering and Computer Science Washington
More informationStrategy for Individuals Distribution by Incident Nodes Participation in Star Topology of Distributed Evolutionary Algorithms
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16, No 1 Sofia 2016 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2016-0006 Strategy for Individuals Distribution
More informationA *69>H>N6 #DJGC6A DG C<>C::G>C<,8>:C8:H /DA 'D 2:6G, ()-"&"3 -"(' ( +-" " " % '.+ % ' -0(+$,
The structure is a very important aspect in neural network design, it is not only impossible to determine an optimal structure for a given problem, it is even impossible to prove that a given structure
More informationCHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION
131 CHAPTER 6 ORTHOGONAL PARTICLE SWARM OPTIMIZATION 6.1 INTRODUCTION The Orthogonal arrays are helpful in guiding the heuristic algorithms to obtain a good solution when applied to NP-hard problems. This
More informationEstivill-Castro & Murray Introduction Geographical Information Systems have served an important role in the creation and manipulation of large spatial
Spatial Clustering for Data Mining with Genetic Algorithms Vladimir Estivill-Castro Neurocomputing Research Centre Queensland University of Technology, GPO Box 44, Brisbane 4, Australia. vladimir@fit.qut.edu.au
More informationEcient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines
Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines B. B. Zhou, R. P. Brent and A. Tridgell Computer Sciences Laboratory The Australian National University Canberra,
More informationGenetic Algorithm for Circuit Partitioning
Genetic Algorithm for Circuit Partitioning ZOLTAN BARUCH, OCTAVIAN CREŢ, KALMAN PUSZTAI Computer Science Department, Technical University of Cluj-Napoca, 26, Bariţiu St., 3400 Cluj-Napoca, Romania {Zoltan.Baruch,
More informationA HYBRID APPROACH TO GLOBAL OPTIMIZATION USING A CLUSTERING ALGORITHM IN A GENETIC SEARCH FRAMEWORK VIJAYKUMAR HANAGANDI. Los Alamos, NM 87545, U.S.A.
A HYBRID APPROACH TO GLOBAL OPTIMIZATION USING A CLUSTERING ALGORITHM IN A GENETIC SEARCH FRAMEWORK VIJAYKUMAR HANAGANDI MS C2, Los Alamos National Laboratory Los Alamos, NM 87545, U.S.A. and MICHAEL NIKOLAOU
More informationphase transition. In summary, on the 625 problem instances considered, MID performs better than the other two EAs with respect to the success rate (i.
Solving Binary Constraint Satisfaction Problems using Evolutionary Algorithms with an Adaptive Fitness Function A.E. Eiben 1;2, J.I. van Hemert 1, E. Marchiori 1;2 and A.G. Steenbeek 2 1 Dept. of Comp.
More informationCombined Weak Classifiers
Combined Weak Classifiers Chuanyi Ji and Sheng Ma Department of Electrical, Computer and System Engineering Rensselaer Polytechnic Institute, Troy, NY 12180 chuanyi@ecse.rpi.edu, shengm@ecse.rpi.edu Abstract
More information