Learning Fuzzy Rules by Evolution for Mobile Agent Control


George Chronis, James Keller and Marjorie Skubic
Dept. of Computer Engineering and Computer Science, University of Missouri-Columbia

Abstract

In this paper we propose a learning mechanism for mobile agent navigation. The agent is controlled by a dynamic set of fuzzy rules, where the rule set is learned using genetic algorithms. The rules are adjusted during a training session and tested after satisfactory behavior is observed. This approach provides for learning different navigation schemes, depending on the required behavior of the agent, without dramatic changes in the code, except for the evaluation function. In this work we tested the learning scheme for a situation where the agent has to approach a given set of 2-D coordinates while avoiding obstacles in an unknown, dynamic environment.

1. Introduction

The problem of navigation in a changing, unknown environment is addressed in this paper. A mobile robot is required to approach a target while making its way around obstacles in unknown locations; both the target and the obstacles may be moving. Uncertainty about the environment and the need for an efficient path to the target pose an interesting challenge for controller design. This paper focuses on achieving acceptable behavior in an uncertain and dynamic environment. It is unreasonable to claim that an algorithm exists that will always suggest an optimal path in a changing, unknown world, and we do not make this claim here either. The strength of our approach is that the control algorithm is not hard-coded; instead, a set of rules is learned using evolutionary methods, so the rules are adjusted to the current problem. The system adjusts the control scheme by extracting information from the current requirements, without the need for reprogramming.

Several schemes have been proposed that address the problem of mobile agent navigation. These may be roughly classified into deliberative, reactive, and hybrid. Deliberative approaches decompose the problem into steps that are executed serially. These architectures can guarantee an optimal solution to a problem, provided that a solution exists, the problem is well-defined, and the environment does not change. Purely reactive control schemes, on the other hand, rely on mappings between the sensors and the actuators, which enable the robot to respond rapidly to world changes [4]. These schemes are not computationally intensive, but cannot guarantee an optimal solution. Behavior-based architectures emerged from reactive schemes. Hybrid control schemes have been proposed to combine the best of both worlds: low-level reactive control for dealing with uncertainty, combined with high-level planning. A typical problem with these approaches is how to handle local minima [1][7]. Navigation schemes that combine fuzzy set theory, genetic algorithms, and neural networks have also been proposed in the literature to solve the problem of mobile agent navigation [2][3][5][8][9]. By incorporating learning, preprogramming of specific algorithms is eliminated. This way of building intelligence through experience is closer to the biological process of evolution. However, the learned system may provide little explanation for why specific actions were taken. In the next section we discuss one such approach that uses genetic algorithms to learn a set of fuzzy rules for achieving a goal.
We present an extension of Bonarini's work [2][3], in which we demonstrate a simpler and faster algorithm for a real-world situation. The problem domain has been changed to illustrate how the algorithm can generalize easily to different problems and may be adjusted without significant programming effort to accomplish a variety of tasks. We also relaxed the constraints of a known environment, such as a priori knowledge in the form of maps, introduced dynamic obstacles and moving targets, and ran several experiments in complicated worlds. A Nomad200 robot is trained offline to reach a landmark through an unknown obstacle field. After training, it is tested in a real environment, not necessarily the same as the one set up for training. In Section 3, we present the results and analysis of our experiments on the Nomad200. Finally, in Section 4, we conclude with future applications and enhancements.

2. The Fuzzy-Genetic Approach

In our evolutionary learning strategy, a set of fuzzy rules is dynamically learned for mobile robot navigation. Specifically, the system learns the antecedents and the consequents of the rules (in the form of linguistic variables) to satisfy the requirements specified by the user via an evaluation function. Evolutionary techniques such as survival of the fittest, mutation, and single-point crossover are used for learning. The number of rules is also dynamically adjusted by the system, and the best rules are kept. The membership functions of the fuzzy sets that represent the system variables are fixed, as shown in Figure 1.

The agent may start with an initial set of random or previously learned rules, or it may start with no rules. For different specified tasks, different fuzzy controllers are learned. Learning is achieved using techniques borrowed from the fields of genetic algorithms and psychology. Learning occurs during the training sessions. Once the output of the system is satisfactory, training stops, and the resulting fuzzy controller may be tested in the same environment or in a dynamically changing one. We represent the fuzzy rules as chromosomes, and their parts as genes, so that genetic algorithms can be used for learning. Genetic algorithms are search algorithms based on the mechanics of natural selection and natural genetics. They combine survival of the fittest among string structures (chromosomes) with a structured yet randomized information exchange [6].

2.1 Terms, Definitions and Structures

The controller is governed by a dynamically changing set of fuzzy rules. The number of rules may change with time, and the number of antecedents and consequents and their values may also change. This dynamic set of rules is called the population of fuzzy rules. The initial population can be zero or greater than zero. Each rule has the following general form:

If distance is <x> and obstacle direction is <y> and target direction is <z> then robot direction is <w>

The variables are defined as follows:
distance = the distance from the agent to the obstacle
obstacle direction = the relative direction of the obstacle with respect to the agent
target direction = the relative direction of the target with respect to the agent
robot direction = the relative direction in which the agent should turn, with respect to its current position

The membership functions for the variables are shown in Figure 1. The representations of obstacle direction, target direction, and robot direction are the same. Each rule is represented by a string, which is called a chromosome. Each chromosome has genes that represent the parts of the antecedent and the consequent of the rule. The values of each gene are not strictly 0 or 1, as in conventional genetic algorithms, but vary depending on the membership value of each variable. Each variable used by the controller is part of either the antecedent or the consequent; hence, each gene actually represents a variable. For example, the rule "If distance is close, and obstacle direction is front, and target direction is right, then robot direction is right" is represented by a chromosome with four genes corresponding to the four fuzzy variables, as shown in Figure 2. A more detailed description of the rule structure is given in Section 2.3.

Figure 1. Linguistic variables used by the system. Distance takes the values VC (Very Close), C (Close), M (Medium), F (Far), VF (Very Far); the direction variables take the values F (Front), FR (Front Right), R (Right), BR (Back Right), B (Back), BL (Back Left), L (Left), FL (Front Left).

Figure 2. A chromosome representing a rule: the genes (C, F, R, R) encode distance, obstacle direction, target direction, and robot direction.

A don't care value is represented by a -1 in the gene, to account for variables missing from a rule. A rule containing a missing variable is equivalent to the set of rules that have all the possible values for this variable. To speed the learning process, the rule population is divided into sub-populations that contain rules with the same values in their antecedents. The environment as perceived by the robot sensors is called a state. Sub-populations allow only the rules relevant to a specific state to be fired. Note that since don't cares are allowed, a specific rule could belong to more than one sub-population. Furthermore, a rule may belong to more than one sub-population in different degrees, since these are fuzzy rules. Also, a state could be matched by more than one sub-population in different degrees. However, the rules that belong to sub-populations that do not match the current state at all are not considered by the system during this state. In this way, the system learns faster.
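To make the encoding concrete, here is a minimal Python sketch of a rule chromosome and of sub-population matching, assuming a crisp state for simplicity (in the actual system matching is a fuzzy degree). The linguistic values and the -1 don't-care convention come from the text above; the class and function names (FuzzyRule, matches, subpopulation_of) are hypothetical and not part of the original system.

```python
# Illustrative sketch of the rule/chromosome encoding described above.
from dataclasses import dataclass

DISTANCE = ["VC", "C", "M", "F", "VF"]                    # very close ... very far
DIRECTION = ["F", "FR", "R", "BR", "B", "BL", "L", "FL"]  # front ... front-left
DONT_CARE = -1

@dataclass
class FuzzyRule:
    # Antecedent genes: indices into DISTANCE / DIRECTION, or DONT_CARE.
    distance: int
    obstacle_dir: int
    target_dir: int
    # Consequent gene: index into DIRECTION (robot turning direction).
    robot_dir: int
    strength: float = 0.0   # updated by the reinforcement function (Section 2.2)

def matches(gene: int, state_value: int) -> bool:
    """A gene matches a state value if it equals it or is a don't care."""
    return gene == DONT_CARE or gene == state_value

def subpopulation_of(rules, state):
    """Select the rules whose antecedents match the current (crisp) state.

    `state` is a triple of indices (distance, obstacle_dir, target_dir); in the
    full system the match is a degree in [0, 1] rather than a yes/no decision.
    """
    d, o, t = state
    return [r for r in rules
            if matches(r.distance, d) and matches(r.obstacle_dir, o) and matches(r.target_dir, t)]

# The example rule of Figure 2: genes (C, F, R, R).
example = FuzzyRule(DISTANCE.index("C"), DIRECTION.index("F"),
                    DIRECTION.index("R"), DIRECTION.index("R"))
```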
Each evaluation of rules that results in a specific action is called an action cycle. Action cycles are grouped into stages. During a stage, rules are not reinforced after every cycle, but only at the end of the stage.

In Section 3.2 we show how this technique results in better handling of local minima situations and of the competition and cooperation problem, as well as of incomplete or delayed reinforcement. A training session ends if the goal is met (i.e., the target is reached without bumping into obstacles), if a failure occurs (bumping into an obstacle), or if the maximum number of cycles is exceeded. This sequence of cycles is called a trial. The output of each trial is a rule base that should be sufficient to navigate the robot to achieve its goal. This rule base, which is equivalent to a fuzzy logic controller (FLC), has as many rules as the number of final sub-populations. That is, only one rule is picked from each sub-population for the final FLC: the fittest rule of that sub-population.

2.2 The Algorithm

The evolutionary algorithm has strong similarities to the fuzzy Q-learning algorithm, the ELF algorithm proposed by Bonarini [2][3], and genetic algorithms. However, there are significant differences, which will be discussed in the next section, primarily in the use of genetic algorithms, fitness functions, and population updates; these enhance system performance and allow the algorithm to be used in dynamic environments. In the beginning, all variables are initialized and the initial state is captured by the robot sensors. Then the trial session begins, which consists of several stages. A stage ends when either the goal is reached, or a predefined number of cycles per stage is completed (typically 5 for this application), or a failure occurs. A trial ends when a predefined number of cycles is completed (typically 15,000 for this application), or the goal is reached, or a failure occurs. The goal is reached when the maximum reinforcement is awarded to the rules during the session. A failure occurs when the agent bumps into an obstacle.

During each stage, the sub-populations whose rules match the current state are selected for evaluation. One rule is selected randomly from each sub-population to construct the set of rules that will fire. This is done so that every rule has a chance to fire and get evaluated. All rules compete to offer the best outcome for the system, so by giving every rule a chance to fire, we take care of the competition problem [3]. However, if a sub-population matches a second state of the same stage, the same rule is selected as in the first state, instead of a random rule. This is done so that the sub-population (through the selected rule) is given a chance to demonstrate its abilities before reinforcement occurs. This is the whole point behind the idea of stages: using delayed reinforcement to evaluate sequences of actions rather than every single action. This is a step towards the cooperation of the rules. After all the rules fire, the state is updated and a new cycle begins within the same stage. The learning algorithm is shown in Figure 4.

    initialization (get initial state, etc.)
    while not end of trial
        while not end of stage
            select sub-populations that match the current state
            for all selected sub-populations (sp)
                if sp has been matched before in this stage
                    select the same rule that was used before
                else
                    select a random rule
                end if
            end for
            fire selected rules
            update the state
        end while
        distribute reinforcement
        update the population
    end while

Figure 4. The Learning Algorithm
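A rough, runnable skeleton of the trial/stage/cycle loop of Figure 4 might look as follows. This is only a sketch of the control flow: the helpers it calls (sense_state, goal_reached, failed, matching_subpopulations, fire_rules, distribute_reinforcement, update_population) are hypothetical placeholders for the components described in the text, not the authors' implementation.

```python
import random

def run_trial(env, population, max_cycles=15_000, cycles_per_stage=5):
    """Skeleton of the trial/stage/cycle loop of Figure 4 (illustrative only)."""
    state = env.sense_state()
    cycle = 0
    while cycle < max_cycles and not env.goal_reached() and not env.failed():
        chosen = {}                        # rule chosen for each sub-population this stage
        for _ in range(cycles_per_stage):  # one stage = a fixed number of action cycles
            if cycle >= max_cycles or env.goal_reached() or env.failed():
                break
            selected_rules = []
            for sp in population.matching_subpopulations(state):
                # Re-use the rule picked earlier in this stage, else pick one at random,
                # so a rule gets a fair chance to act before it is reinforced.
                rule = chosen.setdefault(sp.key, random.choice(sp.rules))
                selected_rules.append(rule)
            env.fire_rules(selected_rules)   # combined fuzzy action
            state = env.sense_state()
            cycle += 1
        # Delayed reinforcement: rules are evaluated only at the end of the stage.
        population.distribute_reinforcement(stage_rules=list(chosen.values()), env=env)
        population.update_population(state)
    return population
```

Note that `chosen` is reset at the start of each stage, so a sub-population keeps the same rule for the whole stage, as described above.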
When the stage is completed, the reinforcement process begins. A reinforcement function assigns a strength to each of the rules that took part in the stage. The formula is:

s_t = s_{t-1} + (r_t - s_{t-1}) * c_g / p_g

where
t is the current cycle
s_t is the strength of the rule being evaluated at cycle t
g is the current stage
c_g is the contribution of the rule fired during stage g
p_g is a measure of the contribution of the rule during all previous stages
r_t is the reinforcement, which is calculated as follows:

r_t = (init_dis - cur_dis) * exp(obs_dis)

where
init_dis is the initial distance of the agent to the target
cur_dis is the current distance of the agent to the target
obs_dis is the distance of the agent to an obstacle

The rule strength is incremented by a quantity proportional to the difference between the present reinforcement and the past strength, multiplied by a learning rate, c_g / p_g. This rate is the ratio between the contribution of the rule in the current stage and the contribution of the rule in past stages, which actually denotes how much the rule has been tested. The contribution of the rules to the current stage is calculated as follows:

c_g(f_e) = Σ_{e ∈ E(g)} µ_e(f_e) / Σ_{e ∈ E(g)} Σ_{f_i ∈ R(g)} µ_e(f_i)

where
µ_e is the combined membership value of the antecedents of the rule for state e
e is the current state
E(g) is the set of states visited during stage g
f_e is the current rule that fires
f_i is any rule from the set R(g) of rules triggering during stage g

The first summation is the sum of all the outputs (membership values) of the rule during all the cycles in the stage (i.e., for every state in that stage). The second (double) summation is the sum of the outputs of all the rules that participated in all the cycles in the stage. For example, assume a stage with two rules has a length of two cycles. Assume that the first rule has an output of .2 and .3 in the two cycles respectively, and the second rule an output of .4 and .5. The contribution (c) of the first rule at the end of the stage is (.2 + .3) / (.2 + .3 + .4 + .5) = 0.5 / 1.4. In this way, a rule contributes to the global action proportionally to its degree of firing, which in turn is proportional to its degree of matching the current state. The contribution measure, p, is updated at the end of each stage by adding c to the old value of p, up to a given maximum. This maximum signifies that the rule has been tested enough, and from then on p is constant. A typical maximum value for this application is 15.

The reinforcement function includes a positive factor computed from the current target distance (relative to the initial target distance). The closer the distance, the bigger the reinforcement is. This factor is scaled by the exponential of the distance to the closest obstacle. If the agent is too close to the obstacle, the reinforcement due to the target is scaled down, since the exponential will be less than 1. When the agent gets further away from an obstacle, it is rewarded by multiplying its reinforcement by a factor greater than 1. Notice that there is a case of negative reinforcement, when the agent gets further away from the target than its starting position.

After the reinforcement process takes place, the fuzzy rule population is updated at the end of each stage. First, the new state has to be matched to at least one sub-population. If no such sub-population exists, then a new one is created by the controller. The new sub-population has one new rule, which is composed of the antecedents that best match the new state, and a random consequent. This rule might contain don't care values for one or more fuzzy variables. This is a way for the population to evolve, and for an initial population to be created. In addition, there should be a way to eliminate old and weak rules, so that the size of the population does not increase dramatically by creating one rule for every given state. To achieve this, a function that monitors the size of the population prunes the weaker rules when necessary. According to the evaluation function, the fittest rules survive to the next generation and the rest are deleted, so that a certain average number of rules is maintained in every sub-population. The population monitor function monitors the sub-populations and adjusts the rule cardinality using the following heuristic formula:

optimal_cardinality = max(1, (max_r - max_r * 0.1 - max_out) / (max_r * 0.1))

where max_r is the maximum reinforcement (typically 1000 for this application) and max_out is the maximum output of the system so far (in terms of reinforcement). This formula ensures that in the beginning a large population will be created to explore a large search space. As the output of the system becomes better with increased performance, the cardinality of the sub-populations decreases, and rules eventually get deleted. Thus, not only is the size of the rule base kept at a reasonable level, but the final FLCs are also more precise and more accurate.
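For concreteness, the sketch below expresses the strength update, the reinforcement signal, the contribution of a rule, and the cardinality heuristic as plain Python functions, and recomputes the worked example above. Only the formulas come from the text; the function names and calling conventions are our own assumptions.

```python
import math

def reinforcement(init_dis: float, cur_dis: float, obs_dis: float) -> float:
    """r_t = (init_dis - cur_dis) * exp(obs_dis): progress toward the target,
    scaled by how far the agent stays from the nearest obstacle."""
    return (init_dis - cur_dis) * math.exp(obs_dis)

def updated_strength(s_prev: float, r_t: float, c_g: float, p_g: float) -> float:
    """s_t = s_{t-1} + (r_t - s_{t-1}) * c_g / p_g."""
    return s_prev + (r_t - s_prev) * c_g / p_g

def contribution(rule_outputs, all_outputs) -> float:
    """c_g of one rule: its summed firing degrees over the stage divided by the
    summed firing degrees of every rule that fired during the stage."""
    return sum(rule_outputs) / sum(sum(outs) for outs in all_outputs)

def optimal_cardinality(max_out: float, max_r: float = 1000.0) -> float:
    """Heuristic target size of a sub-population; shrinks as performance nears max_r."""
    return max(1.0, (max_r - 0.1 * max_r - max_out) / (0.1 * max_r))

# Worked example from the text: two rules over two cycles, outputs (.2, .3) and (.4, .5).
c_first = contribution([0.2, 0.3], [[0.2, 0.3], [0.4, 0.5]])   # 0.5 / 1.4 ≈ 0.357
```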
Another big advantage of this approach is that local minima can be avoided by mutation on the rules, as we will discuss in Section 3. Population updating can also take place even if the cardinality of the rules meets the optimal cardinality; this happens when the system senses a local minimum. By mutation, local minima can be escaped, as is done in genetic algorithms. Multi-point crossover, uniform crossover, inversion, and gene circulation in the chromosomes could also have been used, but they are not necessary for this specific application, since the length of the chromosome is only 4. However, for more complicated antecedents, different genetic operators can be integrated into the system.

2.3 Implementation

In this section, we present the main structures used for implementing the learning algorithm. The sub-population (sp) vector (Figure 5) contains the different sub-populations that are active at any time. Each entry in the sp vector contains information about which rules belong to the sub-population, which rule was fired last, and during which cycle.

Figure 5. The sub-population structure. Each entry stores the number of rules in the sub-population, the indices of those rules, the index of the rule that matched the previous state (0 if none), and the cycle of the last stage in which that rule was triggered.

The rule vector (Figure 6) contains the antecedent and consequent parts of each rule, together with the values used for calculating its strength, such as c and p, and a flag indicating whether it is an active rule or not.

Figure 6. The rule structure. Each rule stores its antecedent genes (distance, obstacle direction, target direction), its consequent (robot direction, the output), the values c, p, and strength, the last cycle in which it was used, whether it is active, a reserved field, and whether it fired in the last stage.

Finally, the state vector contains only three entries, for the three values of the antecedent. Each value of the fuzzy sets is represented by an integer from 0 to the number of values for that variable, counted from left to right. So, for the distance there are 5 values, with 0 representing very close, 1 representing close, and so on.
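A possible in-memory mirror of these structures, written as Python dataclasses rather than the flat integer vectors of Figures 5 and 6, is sketched below; the class and field names are illustrative assumptions, while the fields themselves follow the descriptions above.

```python
# Illustrative Python mirror of the structures in Figures 5 and 6.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RuleEntry:                       # one row of the rule vector (Figure 6)
    distance: int                      # antecedent genes (-1 = don't care)
    obstacle_dir: int
    target_dir: int
    robot_dir: int                     # consequent (output)
    c: float = 0.0                     # contribution in the current stage
    p: float = 0.0                     # accumulated contribution (capped at a maximum)
    strength: float = 0.0
    last_cycle_used: int = 0
    active: bool = True
    fired_in_last_stage: bool = False

@dataclass
class SubPopulation:                   # one entry of the sp vector (Figure 5)
    rule_indices: List[int] = field(default_factory=list)
    last_matched_rule: int = 0         # rule that matched the previous state (0 = none)
    last_trigger_cycle: int = 0        # cycle of the last stage in which it was triggered

State = Tuple[int, int, int]           # (distance, obstacle direction, target direction) indices
```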

3. Experimental Results and Analysis

The experiment consists of two phases: training and testing. The target is specified by a pair of 2-D coordinates and may change dynamically during testing or training, to simulate the behavior of following a moving target. The obstacles are also allowed to move randomly. Note that the rules are not constructed in a way that makes them depend on specific environmental arrangements. The abstract structure of the rules captures features of the environment during the training phase and adapts the behavior of the robot as specified via the fitness function. Thus, we relax the limitation mentioned in [2] referring to fuzzy rules that are learned only for prespecified environments and problems.

3.1 Experiments

In the experiment, we trained the Nomad200 on a simulator and tested the resulting FLCs on the real robot. We trained the rule base for 25,000 cycles per trial, using 7 cycles per stage for delayed reinforcement, which took about 5 minutes in real time for each trial. The average number of active rules was 45, and the resulting FLCs consisted of about 12 rules. It took 16 trials to reach a satisfactory success rate (measured in reinforcement units) of about 89%. These numbers are slightly larger than those presented in [2] (about 12 trials with 15,000 cycles per trial). However, the training computation time is significantly smaller, since no correlation factors were used, and the average cycle time is smaller due to simplifications in the algorithm. The larger number of cycles and trials reflects the extra effort to adjust the rules for dynamic environments. Thus, greater generalization is achieved without an increase in the failure rate.

The graph of Figure 7b shows the learning progress in terms of reinforcement during the last trial period of training. Figure 7a shows the actual environment used for training. Recall from Section 2 that the closer the robot is to the target, the bigger the reinforcement it receives. Note that the robot saves the rule base at point A, mutates to avoid the local minimum, encounters the local minimum again at point B, saves the rule base and mutates again, and finally escapes the minimum and declares success at point C.

Figure 7a. Training environment.
Figure 7b. Learning at the final trial session: local minima at points A and B trigger mutations, followed by success at point C.

After training, we tested two different FLCs: the one that was saved at point B before the mutation (FLC 1), and the final one at point C (FLC 2). The results in a static environment, slightly different from the one used for training, are illustrated in Figures 8 and 9, which show the reinforcement distribution during testing. The reinforcement value is calculated during testing to serve as a performance measure (i.e., the relative distance to the target point). Notice the indication that the system is actually learning how to avoid local minima, and not just how to reach a goal and avoid obstacles along the way.
The FLC at point B (Figure 8) fails to escape the local minimum, while the final FLC (Figure 9) has learned how to handle such a situation and reacts immediately to escape the local minimum.

Figure 8a. Testing environment for FLC 1.
Figure 8b. Reinforcement progress for FLC 1.
Figure 9a. Testing environment for FLC 2.
Figure 9b. Reinforcement progress for FLC 2.

Finally, in Figure 10, we show how the agent behaves in a dynamic environment. The results were obtained while simultaneously moving obstacles, introducing new obstacles, and changing the goal location (forcing the agent to follow a moving target). The abrupt changes in the curve signify corresponding changes in the environment. Since the environment is actually changing faster than the robot can move, there are sudden changes in reinforcement, depending on the direction in which the target and the obstacles move. The agent still maintains reasonable behavior, receiving an average reinforcement that indicates a robust learned FLC.

Figure 10. Testing in a dynamic environment.

3.2 Analysis

This implementation of a fuzzy controller deals with several problems a designer of an autonomous mobile agent has to face. The use of learning relieves the designer of the task of specifying a static set of rules. However, machine learning introduces new challenges to the programmer of the controller. Some of the most interesting features shown by FLCs come from the interaction among the fuzzy rules that match a given state; therefore, cooperation among these fuzzy rules is desirable. On the other hand, evolutionary algorithms need to evaluate the contribution that single members of a population make to the performance of the system; in other words, there is competition among the members of the population. This implementation uses the concept of sub-populations to handle competition and cooperation. The rules within the same sub-population compete for the optimal action, while the sub-populations cooperate to generate the best path in the search space.

This controller performs very well in a dynamic environment because it does not generate rules to predict events that might never take place. It is important to identify whether a state is relevant for a given behavior. For instance, consider a simple behavior like recharge as part of a more complex behavioral architecture for an autonomous agent. The recharge behavior is responsible for making the agent navigate towards a recharge station when its power is running low. If the agent knows that it should activate the recharge behavior only when it is low on power, it may avoid considering it elsewhere, thus obtaining more efficient control. Moreover, if the agent knows that it should learn this behavior only when it has a power constraint, it avoids wasting time learning it in a case where the power is more than adequate for task completion. This controller considers only the rules that cover the states that occur during the learning trial. Thus, if a state is never visited, the corresponding rules are never generated. We consider only the visited search space.

Imperfect or delayed reinforcement is another problem that arises in the design of a learning algorithm using reward and punishment procedures. In order for a rule to demonstrate its ability to navigate the agent, it has to be given a fair amount of time (or cycles) to do so. This is the idea behind the use of stages: rules are only evaluated at the end of each stage. The rule base has the opportunity to drive the agent away from local minima by being given a certain number of cycles (a stage) to do so. Then, at the end of each stage, the rule base is reinforced. Thus, we avoid local minima that would have occurred if the rules were reinforced after each cycle.

One of the most common problems that occurs when pursuing different goals simultaneously is the problem of local minima. A sub-optimal solution may be reached by the algorithm, and the reinforcement process will not allow the algorithm to progress to a different state and achieve a better solution. Since the length of each stage is fixed, it is not guaranteed that the algorithm will escape local minima just by using delayed reinforcement. For this reason, another function exists in the controller that monitors the performance of the system in relation to the size of the rule base. If the rule base remains the same over a certain period of time and the system performs better than a certain threshold, while still not having received the maximum reinforcement, the rule base is saved and the worst rules are mutated to escape the local minimum. The threshold is selected empirically (72% of the maximum reinforcement). If the population has not been updated for a given number of stages, and the system performs better than the threshold, we assume that the agent is stuck at a local minimum. By mutating rules, the agent escapes this minimum and proceeds to search for the global minimum. However, the rule base is saved before mutation and presented as one sub-optimal solution at the end of the trial. This is similar to the way genetic algorithms overcome local minima. More complicated genetic operators, such as multi-point crossover, uniform crossover, or inversion, may also be used in more complex applications.
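The local-minimum escape heuristic just described might be sketched as follows. The 72% threshold comes from the text; the stall window, the number of rules mutated, and the way a rule is mutated (re-drawing its consequent gene) are hypothetical choices made only for illustration.

```python
import copy
import random

STALL_THRESHOLD = 0.72   # fraction of the maximum reinforcement (from the text)
N_DIRECTIONS = 8         # F, FR, R, BR, B, BL, L, FL

def escape_local_minimum(population, best_reinforcement, max_reinforcement,
                         stages_without_update, stall_stages=20, n_mutate=3):
    """Detect a local minimum and escape it by mutating the weakest rules.

    `population` is assumed to be a list of rule objects with `strength` and
    `robot_dir` attributes. Returns a saved copy of the rule base (kept as a
    sub-optimal solution) when a local minimum is detected, otherwise None.
    `stall_stages` and `n_mutate` are illustrative, not values from the paper.
    """
    stalled = stages_without_update >= stall_stages
    good_but_not_max = (STALL_THRESHOLD * max_reinforcement < best_reinforcement
                        < max_reinforcement)
    if not (stalled and good_but_not_max):
        return None

    saved = copy.deepcopy(population)                    # save before mutating
    for rule in sorted(population, key=lambda r: r.strength)[:n_mutate]:
        rule.robot_dir = random.randrange(N_DIRECTIONS)  # mutate the consequent gene
        rule.strength = 0.0                              # let it be re-evaluated
    return saved
```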
Overall, the system performed fairly well, resulting in rule bases that succeed about 89% of the time after the trial periods. For each trial period, there was a maximum reinforcement of 1000, 5 cycles per stage, and a fixed maximum number of cycles.

4. Concluding Remarks

We have shown how genetic algorithms can be used to learn fuzzy rule combinations for agent navigation in dynamic, unstructured environments. The method presented has been tested in a real-world environment and is suitable for real-time navigation, after a certain number of training sessions in a simulated world. We have not yet implemented an algorithm that learns the other variables used by the controller, such as the optimal number of active rules, the fuzzy membership functions, parts of the reinforcement function, and ways of updating the population (e.g., mutation rates, or which chromosomes to choose for crossover). We believe that future research could determine the tradeoffs between computational resources and machine learning optimization if more variables are actually learned by the system. We would like to enhance the evolutionary approach described in this paper to make it capable of performing more complicated tasks with minimal programming effort.

References

[1] R. C. Arkin, "Motor Schema Based Navigation for a Mobile Robot: An Approach to Programming by Behavior," IEEE.
[2] A. Bonarini and F. Basso, "Learning to compose fuzzy behaviors for autonomous agents," International Journal of Approximate Reasoning, 11:1-158, New York, NY, 1994.
[3] A. Bonarini, "Anytime learning and adaptation of structured fuzzy behaviors," Adaptive Behavior Journal, Special Issue on Complete Agent Learning in Complex Environments, M. Mataric (Ed.), no. 5, 1997.
[4] R. A. Brooks, "A Robust Layered Control System for a Mobile Robot," IEEE Journal of Robotics and Automation, vol. RA-2, no. 1, March.
[5] P. Y. Glorennec, "Fuzzy Q-learning and dynamic fuzzy Q-learning," in Proc. Third IEEE Int. Conf. on Fuzzy Systems, IEEE Computer Press, Piscataway, NJ.
[6] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley.
[7] D. Payton, J. K. Rosenblatt, and D. Keirsey, "Plan Guided Reaction," IEEE Transactions on Systems, Man and Cybernetics, vol. 20, no. 6, Nov./Dec. 1990.
[8] A. Saffiotti, E. H. Ruspini, and K. Konolige, "Robust Execution of Robot Plans Using Fuzzy Logic," in Proceedings of the IJCAI-93 Workshop on Fuzzy Logic in Artificial Intelligence, Chambéry, France.
[9] A. Saffiotti et al., "A Fuzzy Controller for Flakey, the Robot," in Proceedings of the 11th National Conference on Artificial Intelligence (AAAI-93), Washington, DC, USA.


Classification Using Unstructured Rules and Ant Colony Optimization Classification Using Unstructured Rules and Ant Colony Optimization Negar Zakeri Nejad, Amir H. Bakhtiary, and Morteza Analoui Abstract In this paper a new method based on the algorithm is proposed to

More information

HYBRID GENETIC ALGORITHM WITH GREAT DELUGE TO SOLVE CONSTRAINED OPTIMIZATION PROBLEMS

HYBRID GENETIC ALGORITHM WITH GREAT DELUGE TO SOLVE CONSTRAINED OPTIMIZATION PROBLEMS HYBRID GENETIC ALGORITHM WITH GREAT DELUGE TO SOLVE CONSTRAINED OPTIMIZATION PROBLEMS NABEEL AL-MILLI Financial and Business Administration and Computer Science Department Zarqa University College Al-Balqa'

More information

Dynamic Robot Path Planning Using Improved Max-Min Ant Colony Optimization

Dynamic Robot Path Planning Using Improved Max-Min Ant Colony Optimization Proceedings of the International Conference of Control, Dynamic Systems, and Robotics Ottawa, Ontario, Canada, May 15-16 2014 Paper No. 49 Dynamic Robot Path Planning Using Improved Max-Min Ant Colony

More information

Fuzzy adaptive genetic algorithms: design, taxonomy, and future directions

Fuzzy adaptive genetic algorithms: design, taxonomy, and future directions Original paper Soft Computing 7 (2003) 545 562 Ó Springer-Verlag 2003 DOI 10.1007/s00500-002-0238-y Fuzzy adaptive genetic algorithms: design, taxonomy, and future directions F. Herrera, M. Lozano Abstract

More information

JHPCSN: Volume 4, Number 1, 2012, pp. 1-7

JHPCSN: Volume 4, Number 1, 2012, pp. 1-7 JHPCSN: Volume 4, Number 1, 2012, pp. 1-7 QUERY OPTIMIZATION BY GENETIC ALGORITHM P. K. Butey 1, Shweta Meshram 2 & R. L. Sonolikar 3 1 Kamala Nehru Mahavidhyalay, Nagpur. 2 Prof. Priyadarshini Institute

More information

Approach Using Genetic Algorithm for Intrusion Detection System

Approach Using Genetic Algorithm for Intrusion Detection System Approach Using Genetic Algorithm for Intrusion Detection System 544 Abhijeet Karve Government College of Engineering, Aurangabad, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, Maharashtra-

More information

Distributed Optimization of Feature Mining Using Evolutionary Techniques

Distributed Optimization of Feature Mining Using Evolutionary Techniques Distributed Optimization of Feature Mining Using Evolutionary Techniques Karthik Ganesan Pillai University of Dayton Computer Science 300 College Park Dayton, OH 45469-2160 Dale Emery Courte University

More information

Imputation for missing data through artificial intelligence 1

Imputation for missing data through artificial intelligence 1 Ninth IFC Conference on Are post-crisis statistical initiatives completed? Basel, 30-31 August 2018 Imputation for missing data through artificial intelligence 1 Byeungchun Kwon, Bank for International

More information

Seismic regionalization based on an artificial neural network

Seismic regionalization based on an artificial neural network Seismic regionalization based on an artificial neural network *Jaime García-Pérez 1) and René Riaño 2) 1), 2) Instituto de Ingeniería, UNAM, CU, Coyoacán, México D.F., 014510, Mexico 1) jgap@pumas.ii.unam.mx

More information

AN EVOLUTIONARY APPROACH TO DISTANCE VECTOR ROUTING

AN EVOLUTIONARY APPROACH TO DISTANCE VECTOR ROUTING International Journal of Latest Research in Science and Technology Volume 3, Issue 3: Page No. 201-205, May-June 2014 http://www.mnkjournals.com/ijlrst.htm ISSN (Online):2278-5299 AN EVOLUTIONARY APPROACH

More information

GRANULAR COMPUTING AND EVOLUTIONARY FUZZY MODELLING FOR MECHANICAL PROPERTIES OF ALLOY STEELS. G. Panoutsos and M. Mahfouf

GRANULAR COMPUTING AND EVOLUTIONARY FUZZY MODELLING FOR MECHANICAL PROPERTIES OF ALLOY STEELS. G. Panoutsos and M. Mahfouf GRANULAR COMPUTING AND EVOLUTIONARY FUZZY MODELLING FOR MECHANICAL PROPERTIES OF ALLOY STEELS G. Panoutsos and M. Mahfouf Institute for Microstructural and Mechanical Process Engineering: The University

More information

Genetic Programming. and its use for learning Concepts in Description Logics

Genetic Programming. and its use for learning Concepts in Description Logics Concepts in Description Artificial Intelligence Institute Computer Science Department Dresden Technical University May 29, 2006 Outline Outline: brief introduction to explanation of the workings of a algorithm

More information

A COMPARATIVE STUDY OF FIVE PARALLEL GENETIC ALGORITHMS USING THE TRAVELING SALESMAN PROBLEM

A COMPARATIVE STUDY OF FIVE PARALLEL GENETIC ALGORITHMS USING THE TRAVELING SALESMAN PROBLEM A COMPARATIVE STUDY OF FIVE PARALLEL GENETIC ALGORITHMS USING THE TRAVELING SALESMAN PROBLEM Lee Wang, Anthony A. Maciejewski, Howard Jay Siegel, and Vwani P. Roychowdhury * Microsoft Corporation Parallel

More information

HARNESSING CERTAINTY TO SPEED TASK-ALLOCATION ALGORITHMS FOR MULTI-ROBOT SYSTEMS

HARNESSING CERTAINTY TO SPEED TASK-ALLOCATION ALGORITHMS FOR MULTI-ROBOT SYSTEMS HARNESSING CERTAINTY TO SPEED TASK-ALLOCATION ALGORITHMS FOR MULTI-ROBOT SYSTEMS An Undergraduate Research Scholars Thesis by DENISE IRVIN Submitted to the Undergraduate Research Scholars program at Texas

More information

Dynamic Control of Genetic Algorithms using Fuzzy Logic Techniques

Dynamic Control of Genetic Algorithms using Fuzzy Logic Techniques Dynamic Control of Genetic Algorithms using Fuzzy Logic Techniques Michael A. LEE Computer Science Department University of California Davis, CA 95616 lee@cnmat.berkeley.edu Hideyuki TAKAGI Computer Science

More information

Mutation in Compressed Encoding in Estimation of Distribution Algorithm

Mutation in Compressed Encoding in Estimation of Distribution Algorithm Mutation in Compressed Encoding in Estimation of Distribution Algorithm Orawan Watchanupaporn, Worasait Suwannik Department of Computer Science asetsart University Bangkok, Thailand orawan.liu@gmail.com,

More information

Partitioning Sets with Genetic Algorithms

Partitioning Sets with Genetic Algorithms From: FLAIRS-00 Proceedings. Copyright 2000, AAAI (www.aaai.org). All rights reserved. Partitioning Sets with Genetic Algorithms William A. Greene Computer Science Department University of New Orleans

More information