Adapting Gait Patterns using Evolutionary Algorithms and Intelligent Trial and Error


Fredrik Sævland
Master's Thesis, Spring 2016


18th May 2016


Abstract

Robots optimized with traditional optimization algorithms may be sensitive to changes in their environment, and re-optimizing the robot is usually not a viable alternative due to the amount of processing power required to optimize a new solution for the changed environment. In this thesis a new approach, based on [9], is proposed as a means of adapting the robot to changes in the environment. The technique builds a feature-space using an illumination algorithm called MAP-Elites, and performs search using Intelligent Trial and Error. Using these techniques, it should be possible to adjust the features of an already evolved robot, adapting instead of re-evolving for new changes.


Contents

1 Introduction
   1.1 Goal
2 Background
   2.1 Evolutionary Algorithms
       Terminology
       Process of Genetic Algorithm (GA)
       Generation
       Overfitting
   2.2 Evolutionary Robotics
       The Reality Gap
   2.3 MAP-Elites
       Illumination algorithms and optimization algorithms
   2.4 Intelligent Trial and Error
   2.5 Curse of Dimensionality
3 Software
   3.1 Robdyn
   3.2 Sferes2
   3.3 Limbo
   3.4 Required computational power
4 Implementation
   4.1 Robot
       4.1.1 Controller system
   Using MAP-Elites and IT&E to adapt gait patterns
       Genome
       Operators and Parameters
   IT&E - Adapting through trial-and-error
       Stopping criteria
   Simulator
       Placing obstacles
       Fitness-function: Measuring performance
       Feature-function: Defining the behavior
       Visualization
       Defining the environment

5 Results
   5.1 MAP-Elites archive
       Alpha cutoff
       Fitness correlation of the Y-axis
       Fitness correlation of the X-axis
       Location of fitness
   Building the MAP-Elites archive
       Parameters
   Adapting using IT&E
       How IT&E searches the archive
       Solutions found by IT&E
       Experiment 1 - IT&E Adaptation vs. Best Individual
       Experiment 2 - IT&E Adaptation vs. Best Individual trained with obstacles
       Conclusion
6 Discussion
   Performance of the MAP-Elites archive
   Performance of the IT&E-optimization
   MAP-Elites and IT&E vs. Traditional optimization
   Future Work
Conclusion
Running the software

List of Figures

2.1 Flowchart of GA
2.2 Pseudocode of the general procedure of evolutionary algorithm [14]
2.3 Plot of Equation 2.1
2.4 Illustration of a simple single-point crossover
2.5 Example of what overfitting might look like in statistical classification
2.6 Pseudocode of a simple MAP-Elites algorithm [24]
    Composing of physical robot
    Photo of physical robot
    Schematic of physical robot
    Lower- and upper-bands of parameters
    Graph of controller-function
    Genome
    Simplified flowchart of IT&E
    Simulation with 0_150_15 configuration
    Simulation with 0_0_0 configuration
    Testing archive
    Testing phenotypes from archive in simulation after running archived value for two runs
    Comparison of adding influence of randomness into the sampling
    Archive at generation …
    Position of test points
    Areas of high fitness
    Phenotype (12, 32) exploiting the back legs to lift the front legs
    MAP-Elites archive in early generations
    MAP-Elites archive through generations
    Best fitness per generation
    Mean of archive per generation
    Phenotypes in archive, per generation
    gatest.cpp - The parameters
    IT&E searching for the best solution, see Table 6.3 for data
    Scatterplot of solutions from two scenarios
    Scatterplot of solutions from two scenarios

5.14 Plot of incline and decline from 0.19 to 0.21 and to …
     With forced stopping criteria of MaxIterations set to …
     IT&E vs. Best performance individuals
     Mean fitness of ELITE/ELITE_O vs. IT&E

Preface

Acknowledgments

I would like to thank all the people that helped me: Matthew Greening for helping me proofread the thesis, and a special thanks to my supervisor Kyrre H. Glette for great support and guidance.


Chapter 1

Introduction

When using robots for everyday tasks, it is important for them to be able to adapt to unpredictable situations in the chaotic real world. One of the major drawbacks of Evolutionary Robotics (ER) is that it often involves expensive computation, and the result may be sensitive to changes in the environment the robots are evolved in. Using what are referred to as optimization algorithms, robots are typically optimized for a single environment, with a single solution that performs well in the given scenario. In [24] Mouret et al. propose a different approach to evolving robots, referred to as illumination algorithms, which illuminate performance rather than just optimizing for it. The general idea is to map out a feature-space where each best-performing "elite" is placed in a multi-dimensional matrix according to the characteristics of how the robot performs, together with a performance measure that tells the user how good a solution with certain characteristics can become. The algorithm used for this purpose, called MAP-Elites, creates such a feature-space where each setting of the features has its own evolved solution. This feature-space, map or archive, can then be searched for a solution with characteristics defined by the user, such as the height of the leg lift or the time the legs spend in contact with the ground. The user can then adjust these features to adapt or change the gait. This approach is an alternative to evolving new strategies every time the robot encounters a different challenge: with something as simple as adjusting a few features, the robot may overcome the new challenge using an already evolved solution. Using the IT&E-algorithm (Intelligent Trial and Error) introduced in "Robots can adapt like animals" [9], it is possible to search this MAP-Elites archive in a fast and efficient way, using simple trial-and-error to look in areas of the feature-space (archive) and narrow in on optimal solutions.
This thesis takes the method proposed in [9] and investigates whether it is similarly viable for other applications. Cully et al. found very good results in their implementation of intelligent adaptation, where the robot adapts new gaits according to the damage or disability of a joint. This application uses the same idea, only applied to a slightly different area. This may show that the technique is flexible and applicable to a broader area of Evolutionary Robotics, while being just as effective and robust as when adapting for broken or disabled legs.

An important aspect of the technique of adapting the gait pattern is that, by using a pre-computed archive, the computation of the archive is separated from the search for solutions and moved away from the application on the robot. This means that the archive can be built on powerful computers and then loaded onto a smaller embedded system, where the relatively lightweight IT&E-algorithm can find the optimal solution in a short time. This not only makes the system faster, but also makes a physical robot cheaper, lighter and more flexible, as the need for large batteries and powerful on-board processing power disappears. This makes such robots a generally better solution, and also attractive for commercial uses where cost and ease of use are important. Instead of requiring the user to do massive computations for the robot to learn to walk, a low-cost system can let the robot download a pre-computed archive from a central server that does all the heavy processing required by evolutionary algorithms.

1.1 Goal

The goal of this thesis is to show that MAP-Elites and IT&E (Intelligent Trial and Error) can be applied to other areas of Evolutionary Robotics, and can adapt gait patterns while the environment changes in terms of incline/decline of the terrain, and obstacle shapes and sizes. The idea is that the same MAP-Elites archive, which defines a set of features, should provide a solution where the features, like leg lift and leg sweeping (the movement of the legs back and forwards), can be adjusted to maximize performance in everything from flat ground to rocky terrain with a steep incline.

Chapter 2

Background

2.1 Evolutionary Algorithms

Evolutionary Algorithms (EAs) are a type of population-based metaheuristic [6] optimization algorithm that uses the mechanics of biological evolution as inspiration. Evolutionary algorithms are a subset of evolutionary computation [1], which contains classes such as genetic algorithms, genetic programming and evolutionary strategies [2]. This research will only take genetic algorithms into consideration. The idea of evolutionary algorithms stems from natural evolution, where the species that make up the world have adapted strength and fitness against the environment, solving complex problems through the natural process of selection. Evolutionary algorithms try to capture this strength of evolution as a way of problem solving: using trial-and-error, encouraging the spread of good individuals and eliminating the weak individuals that do not offer a good solution.

Terminology

As evolutionary algorithms are inspired by biological evolution, the algorithms share the same terminology. In this thesis we will use many of these terms to describe different parts of the algorithms, as well as the different results. Evolutionary algorithms can be summed up as an environment with a population of individuals that strive for survival and reproduction, where the odds are determined by the fitness of each individual. Fitness is measured by a fitness-function which evaluates how well the individual performs the particular task being optimized for. This fitness represents the chance of survival and reproduction for the individual, which implies that for each generation of the population the strong traits will remain, while the weak traits dissipate in a metaphorical survival of the fittest. The inspiration from biology also carries its terminology over to EAs, so the metaphorical representation in the EA uses the same names as biological evolution. The genetic algorithm uses the name individual for a solution that is encoded by the genome, or chromosome. Genetic Algorithms (GAs) take these individuals through the process of evolution.

Process of Genetic Algorithm (GA)

Figure 2.1: Flowchart of GA

BEGIN
  INITIALISE population with random candidate solutions;
  EVALUATE each candidate;
  REPEAT UNTIL (TERMINATION CONDITION is satisfied) DO
    1. SELECT parents;
    2. RECOMBINE pairs of parents;
    3. MUTATE the resulting offspring;
    4. EVALUATE new candidates;
    5. SELECT individuals for the next generation;
  OD
END

Figure 2.2: Pseudocode of the general procedure of an evolutionary algorithm [14]

Evolutionary algorithms and all the underlying classes share the idea behind the process of natural selection, or survival of the fittest. The process is based on a given population of individuals that struggle for the limited resources in a given environment. In order to survive and spread their genomes, the individuals compete for survival and reproduction. The genetic algorithm (GA) is a class of evolutionary algorithm that uses this principle to optimize for a solution. Figure 2.1 shows the process of

evolution in such a genetic algorithm. The algorithm starts off by initializing a random population that goes through the steps of evaluation, selection, crossover and mutation, which produce offspring for a new generation. The process repeats until it is terminated by a termination criterion, or by the user.

Evaluation

Evaluation is what is referred to as a fitness-function. This is the survival requirement for the individuals: the measure of how well an individual performs and of what an improvement means. When performing an evaluation, the individual is asked to perform a test; the results determine the fitness of this individual, i.e. how strong it is at the task that the user wants to optimize for. One example of an evaluation is to maximize the fitness function in Equation 2.1, so that the function outputs the highest result for a genome encoding the value x. If, for instance, the genome of an individual decodes to x = 27, evaluating the fitness function gives f(27) = 100 − 729/100 = 92.71, which is a fairly good individual. On the other hand, an individual whose genome decodes to x = 89 evaluates to f(89) = 100 − 7921/100 = 20.79, meaning that the fitness of this individual is lower than the previous one. This is because the function rewards genomes representing values as close to 0 as possible, which can be seen in Figure 2.3. In the case of the fitness function in Equation 2.2, the fitness of the individual is correlated with the value of the decoded genome: the squaring of values causes the evaluation to prefer as large values of x as possible.

Figure 2.3: Plot of Equation 2.1

f(x) = 100 − x²/100   (2.1)

f(x) = x²   (2.2)

Selection

Parent selection, mate selection and survivor selection are methods of preferring better individuals with higher fitness values for reproduction, while keeping the population at a predetermined size. Selection can be done in many ways, where the goal is usually to select the individuals that perform well, as in nature where stronger individuals with higher fitness have a bigger advantage when it comes to reproduction and survival. Typical selection strategies are tournament, truncation, linear ranking and exponential ranking; see [4] for a comparison of selection schemes.

Crossover

Crossover is the first step of reproduction, in which the selected parents mix their genes into what is called an offspring, which will belong to the new generation. There are different crossover methods, such as single-point crossover (see Figure 2.4), double-point crossover and Partially Matched Crossover (PMX) for permutation representations; see [14] for details about crossover strategies. The job of the crossover procedure is to create the new generation of individuals, which may bring the good traits of the parents over to the next generation. Crossover is a metaphorical way of inheriting traits from the parents.

Figure 2.4: Illustration of a simple single-point crossover

Mutation

After the reproduction procedure has applied crossover to create an offspring, the offspring is mutated in order to introduce an element of randomness into the new generation, create diversity, and help the population escape a local maximum. Without it, the individuals optimize towards a peak that only rises in the local area, and do not search for a new and better maximum. Mutation can be as simple as flipping some bits in a binary genome, or changing one or more values in the genome into other values. Mutation tries to mimic the biological mutation that happens in nature, where phenotypes gain unique traits through changes in the genome. This is valuable for evolution, as new traits are explored that can either be good for the individual or prove fatal. Similarly, in genetic algorithms mutation introduces new information into the population, whether this is good or bad for the individuals.

Generation

When the Evolutionary Algorithm has gone through one iteration of the evolutionary process of evaluation, selection, crossover and mutation, it is referred to as one generation of the population. The next time the Evolutionary Algorithm starts an iteration, the population will contain the offspring produced by the individuals of the previous generation. As the parents get older or inferior, they naturally die off and are replaced by the new generations.

Overfitting

Overfitting is when a model describing the pattern of data also describes its noise and variance. If the model is too specific to the exact training data, the sensitivity to changes can lead to loss of accuracy on out-of-sample data. [5] In order to prevent overfitting the model needs to be somewhat generic, which can be achieved by simply training the model less, or by designing an environment where it is difficult for the genetic algorithm to overfit to a specific niche.
In the application of adapting a gait pattern, overfitting can for instance be caused by the gait being specifically suited to obstacles placed in one particular configuration: instead of evolving robustness to obstacles, the robot evolves the perfect leg placement for that layout. By doing something as simple as randomizing the placement of obstacles, overfitting to the obstacles can be prevented. Another approach to preventing overfitting is called early stopping [27], where the training is stopped when the out-of-sample error increases.

Figure 2.5: Example of what overfitting might look like in statistical classification

2.2 Evolutionary Robotics

Evolutionary Robotics (ER) is a relatively unexplored field of evolutionary computation; only in recent years has it started to become more mainstream in robotics and artificial intelligence. The concept of ER is to create autonomous robots using the techniques of Evolutionary Algorithms, where the robots develop their own skills to adapt and maximize performance in a given environment when performing some kind of task. Evolutionary Robotics typically uses techniques such as Neural Networks and Genetic Algorithms. [26] Evolutionary Robotics is rapidly gaining popularity due to its independence from human influence: the optimization for a given task may be defined very abstractly and unspecifically, enabling the robot to find solutions that may be untraditional and unintuitive for a human. These unintuitive solutions are often based on exploitation of the environment. Although Evolutionary Algorithms are often very robust against local maxima in applications such as ER, the solutions found are rarely optimal, due to the large and complex search space. ER works in a similar manner to the other types of Evolutionary Algorithms: the user creates a population of robots that goes through a process of evolution, where the strong spread their genes through reproduction, while the weak die off. Fitness

is measured by how well the robot performs the desired task, such as walking or jumping, whether the goal is speed, robustness or both. The user typically defines how the robot can do a task through a genome, then defines the fitness criterion which benchmarks how well the robot is doing the task. As the trial-and-error nature of evolutionary algorithms requires a large number of evaluations, this is unfeasible on a physical robot because of the time and effort required. This is why ER is typically done in simulation, with the solution transferred to the real world once found. [7] However, this transfer poses a new set of challenges, referred to as The Reality Gap.

The Reality Gap

When working with Evolutionary Robotics, training the robot in a simulator is often the only viable alternative, as doing real-world evaluations can take too much time and effort. The issue that arises from doing evaluations in a simulation is the disparity between the behavior in the simulation and the behavior in the real world. Simulated environments often have unrealistic aspects, inaccuracies and weaknesses that can be exploited by the robot. Since the robot will always try to maximize the fitness, such exploitation can lead the robot into a deceptive performance, where the algorithm optimizes for a behavior that is not possible in reality. This discrepancy between simulation and reality is what is referred to as The Reality Gap. [17-19, 23] Koos et al. in [19] propose a solution, called The Transferability Approach, where two main objectives are optimized via a multi-objective, Pareto-front MOEA [10] that evolves what is referred to as a transferability measure, alongside the fitness measure of the robot.
Previous approaches to the reality gap have been reality-based optimization, with optimization fully or partially on a real robot; simulation-based optimization, with the entire process in simulation; or robot-in-the-loop simulation-based optimization, which evolves in the simulation and does transfer experiments during the evolution. [19]

2.3 MAP-Elites

MAP-Elites is a technique described in [24]. This type of algorithm is heavily based on genetic algorithms (GAs) [2], and is used as a part of Intelligent Trial and Error (IT&E) as described in [9]. MAP-Elites is what was coined by Mouret et al. an illumination algorithm, which differs from traditional optimization algorithms such as NSGA-II [13], a standard Multi-Objective Evolutionary Algorithm (MOEA), or simple single-objective evolutionary algorithms. MAP-Elites is presented as a new illumination algorithm that searches for the highest-performing solution at each point in the feature space. The search space itself can be high-dimensional, whereas the feature-space

is of lower dimension, defining only the characteristics that are sought after. [24] MAP-Elites is conceptually easy. The user designs a fitness-function

BEGIN
  INITIALIZE map of elites with N dimensions
  REPEAT FOR (I iterations) DO
    IF iter < G THEN
      x' <- random_solution()
    ELSE
      x' <- random_selection(X)
      x' <- random_variation(x')
    b <- feature_descriptor(x')
    p <- performance(x')
    IF P(b) = NULL OR P(b) < p THEN
      P(b) <- p
      X(b) <- x'
  OD
  RETURN feature-performance map (P and X)

  G   Number of random solutions
  X   Map of elites
  x'  Elite from map X, or offspring of such an elite
  p   Performance or fitness
  P   Performance map
  b   Feature descriptor

Figure 2.6: Pseudocode of a simple MAP-Elites algorithm [24]

f(x) that evaluates the fitness of an individual x, similar to a standard evolutionary algorithm. One such fitness-function, or performance measure, could be how fast the robot can move, or how far the robot can get in a set amount of time or iterations. The user will also define the N dimensions of the feature space of the archive, such as how high the robot lifts the legs, or how long each leg is in contact with the ground, as used in [9]. The dimensions are chosen based on what the user requires to be defined in the fitness-performance map, as well as the computational resources available (see Section 2.5). The MAP-Elites algorithm will search for the best solution for each of the cells in the N-dimensional fitness-performance map, each cell being defined by the desired resolution of the archive (e.g. 128 by 128). For instance, the MAP-Elites algorithm can search for the best individual that has the specific feature of lifting the legs and sweeping the legs a set amount, then place it in its respective feature-describing position in the map. There are two types of spaces in the MAP-Elites algorithm; the first is the search space, where all possible values of x reside.
The second space is the fitness-performance map, P, where evaluated individuals are mapped by their feature descriptor, b, which describes the best phenotype that has the characteristics required for that cell in the fitness-performance map, or archive. x is described by its genome or genotype, as well as its phenotype. f(x) is the fitness function that describes the performance of each x, whereas the feature function b(x) describes the value in the fitness-feature space along each of the N dimensions. This can for instance be a calculation of b(x) that measures how long the legs touch the ground and reports back some value to describe this feature, or a run of b(x) in the real world to measure the power consumption of the robot. Another example could be to simply return parts of the genome as a direct correlation to the features. This is useful when the genome represents values to a controller-function (see Section 4.1.1).

Process

As mentioned, the MAP-Elites algorithm, shown in Figure 2.6, is relatively simple. MAP-Elites starts with an initialization process by generating G random genomes and running the performance and feature evaluation on each of them. The random genomes are then mapped out in the archive in their appropriate positions (calculated by b(x)). If multiple individuals in the archive have the same feature description, the one with the highest fitness value dominates the other individuals competing to occupy the cell. The procedure then follows these steps: choose a random cell in the map and create an offspring from the individual in that cell via mutation and/or crossover; evaluate the fitness and behavior of the new offspring and place it in the appropriate cell; if the cell is already occupied, the individual with the highest performance dominates and occupies the cell. This process continues until some termination criterion, such as time, is reached. It can also terminate at a certain generation or when the resulting archive has certain characteristics.

Illumination algorithms and optimization algorithms

Traditional search algorithms, also known as optimization algorithms, have the goal of returning the single highest-performing solution in the search space. Illumination algorithm is a terminology coined by Mouret et al. [24] for a different way of finding good solutions to a problem. Illumination algorithms find solutions that are located in the search space, then map out the highest-performing solution by the features and characteristics of that particular solution.
The user then adjusts the features of the robot by finding a solution in the feature-space that corresponds to the selection. Another advantage of an illumination algorithm is that the fitness at each point in the feature-space gives an overview of how the fitness changes along with the features of the robot. This focus on returning the highest-performing solution at each point will find areas of high fitness, illuminating the fitness potential. In recent years there has been a shift where promoting diversity has become a higher priority than maximizing performance alone; some of these algorithms have been designed with the goal of returning a repertoire of solutions that spans related objectives. [8, 20, 21, 24]

2.4 Intelligent Trial and Error

Intelligent Trial and Error (IT&E) is an algorithm that makes the robot adaptable to changes by searching the behavior-performance map generated by the MAP-Elites algorithm. IT&E uses the knowledge in the behavior-performance map, or archive, to search for compensatory behaviors for alternative objectives by adjusting the features of the solution. This prior knowledge can be used to select solutions in the feature dimensions based on what is expected to perform well, and to find a new solution in a very short time. Guided by this map, the IT&E-algorithm tests behaviors with different feature characteristics, then starts predicting where the best solutions in the map are located, using the assumption that behaviors well suited for the new situation can be found by adjusting features. The users of the robot only need to describe the dimensions of the map as features, along with a fitness measure, through the requirements of the MAP-Elites algorithm. [9] In [9] the IT&E-algorithm is used to search a behavioral feature-space over how the robot moves its legs, with one dimension per leg for how long the leg touches the ground, and with the fitness measure of how far the robot can get. If one of the legs gets damaged, the robot can search the behavioral feature-space for solutions that compensate well, slowly narrowing in on the features describing a behavior where the use of the damaged leg is minimized. This solution is found by trial-and-error, as the name suggests.

Process

The IT&E performs trial-and-error based on confidence values for the individuals. The archive starts out with a low overall confidence, and IT&E begins with the most promising value in the archive, i.e. the cell that holds the individual with the highest fitness value.
The individual is evaluated in the desired environment and is then assigned a confidence value: if it evaluates to a good performance, the confidence is set to a high level, whereas evaluations that show poor performance are assigned a lower level of confidence. The algorithm continues this "select-test-update process", as it is called in [9], and does not stop until a termination criterion is reached; hopefully by then IT&E has found a compensatory behavior that performs well in the environment it tested for.

2.5 Curse of Dimensionality

The Curse of Dimensionality, described in [30], is a term introduced by Bellman in [3] for the problem caused by the exponential increase of volume when extra dimensions are added to a Euclidean space. This means that as more dimensions are added to the data set, the more sparsely

the finite data will be distributed in the space. This has a range of implications in the field of artificial intelligence, due to the difficulty of gathering sufficient data along with the processing power required to generate such data. This relates to the MAP-Elites archive in that the resolution and dimensionality of the archive have a serious impact on quality, implied by the massive increase in required computational power. In the case of this application, the performance-feature map has dimensions 128x128, meaning that it maps out the two features, lift and sweep, as two dimensions with 128 samples each. This two-dimensional space makes it easy to visualize the map, and minimizes the computation needed for a fully populated archive with solutions for every variation of the desired features. Adding just one more feature would increase the feature-space by a factor of 128, making the computation 128 times more intensive in order to provide the same density of data in the feature-space. Reducing the feature-space has advantages such as reducing computational complexity, increasing the density of data, and keeping it in three dimensions or fewer, which makes it easier to visualize and helps users understand the data. In [9] the computation of a six-dimensional feature space required roughly two weeks on one multi-core computer; a resolution of 128 would give 128^6 ≈ 4.4 × 10^12 samples, given that the archive is fully populated. Having one less dimension reduces the feature-space by a factor of 128, which results in 128^5 ≈ 3.4 × 10^10 samples.


Chapter 3

Software

OpenSceneGraph
OpenDynamicsEngine (older version)
C++ (C++11), gcc (Debian)
Robdyn (tested on 2/5/16)
Sferes2 (tested on 2/5/16)
Limbo (older version)

3.1 Robdyn

Robdyn is the toolkit upon which the simulation is built. It is a wrapper around the popular OpenSceneGraph graphics toolkit (openscenegraph.org), together with the Open Dynamics Engine (ode-wiki.org) as its physics-engine counterpart. This wrapper is made to make the simulation of robots a much easier task; Robdyn is not a framework in the same sense as Sferes2 (Section 3.2) or Limbo (Section 3.3). As a wrapper, Robdyn helps define the two components of a robot simulation. The first is the robot, which is built from primitive geometric shapes and joints with the properties of real servos, in this case made to mimic the Dynamixel AX servo (see the Dynamixel manual). The second component of the simulation is the environment: the virtual world in which the defined robot resides. The environment can be tweaked through parameters, such as adding geometric shapes as obstacles, creating a sloped plane, or changing the gravity. The environment is created with the configuration of the robot inserted into the simulator, which is then run for a set number of steps until it exits. Robdyn is developed by the Resibots team as a part of their research on evolutionary robotics, and is the same simulator used in, for instance, [9].

3.2 Sferes2

Sferes2 is a high-performance C++ framework for evolutionary computation, and is the back-end for the implementation of MAP-Elites [24]. The Sferes2-framework is the component that handles the evolutionary computation. The user defines the algorithm in a template, then runs it through Sferes2, or uses one of the many supplied algorithms such as NSGA-II [13], ɛ-MOEA [12] or CMA-ES [16]. Sferes2 is divided into three parts: the framework, the optional modules, and the user experiments. The framework provides the binary string genome, as well as a real-number genome with the supplied operators: Gaussian mutation, uniform mutation, polynomial mutation [11], and SBX 4 crossover [11]. The user is free to define any additional modules for Sferes2. [25]

3.3 Limbo

Limbo is designed as a lightweight framework for Bayesian optimization and is mainly created for research with novel algorithms. Built upon the same modularity and flexibility as Sferes2, it can either be used with the supplied modules, or the user can define their own. In the case of IT&E, it is added to the Limbo-framework as an external module, although Limbo was written by the same authors. [22]

3.4 Required computational power

Performance in terms of computational speed is not what makes MAP-Elites a better alternative, as the initial computation is equally demanding as a traditional optimization algorithm. Training an archive for 2300 generations, such as the one used in Section 5.1, will require 2-3 days of computation on a standard desktop computer, and computation on a two-core server will produce approximately 350 generations in 24 hours. This is with a configuration of 300 individuals and the step increment discussed in Section 3.1. Doing these kinds of calculations on-board a robot, or during operation, is regarded as infeasible. This is where the strength of archiving comes in: the computation can be done once on an external computer, such as a computing cluster.

4 Simulated binary crossover

Chapter 4

Implementation

Based upon the work of Mouret et al. in [24] and Cully et al. in [9], the implementation will use these algorithms in a different area in order to see if MAP-Elites (Section 2.3) and IT&E (Section 2.4) perform just as well in the application of adapting gait patterns. This implementation of MAP-Elites and IT&E differs in terms of both application and implementation. Robots that can adapt like animals [9] uses a six-dimensional space where each dimension represents the proportion of time the ith leg spends in contact with the surface, which is useful for an application where the goal is to adapt to disabled legs, as the IT&E-algorithm can search among solutions that offer varying degrees of leg-contact with the ground. As mentioned in Section 2.4, the usage of these techniques depends on describing the features that are desired when searching for behavior. This thesis takes up the challenge of adapting a gait pattern using the techniques described in Chapter 2, which involves some changes, as described in this chapter.

4.1 Robot

The robot used in this implementation is based on a robot developed using evolutionary algorithms itself, as described in [29]. The robot is built in the simulator using simple geometric shapes that are linked together with joints called motors, or servos. Figure 4.1 illustrates each individual component of the robot. The head is what is called the main body. When running the simulation, the graphical interface will create a tracer from that body, as illustrated by the red dot in Figure 4.1. The reason why the head is the main body is mainly due to simplifications in the design, as the head is the first part that is created, and the robot is designed around this part. The main body is only used as a reference for the position of the robot in the virtual space. Looking at Figure 4.1, the joints are numbered in the order of creation when running the build-process of the robot.
The robot has four legs with two joints each. The upper joints (1, 3, 5, 7) have a sweeping movement, meaning that the servos on those joints move in a plane horizontal to the ground. This movement is referred to as sweep. The lower joints (2, 4, 6, 8) have a dihedral movement, which means that the robot lifts the joints from the ground. This is referred to as lift. The middle joints, between the head and mid, and mid to tail, do a sweeping motion like the upper joints of the legs. Some simplifications are done in the design, such as using rectangles instead of capped cylinders for the body parts. This should not have an impact on the functionality of the robot, but does make it much easier to create the robot. The robot, also referred to as robot 4 in [29], has the measurements described in Figure 4.3, which are replicated in the simulation. The robot has some rotated joints, meaning that the legs are angled forwards on the front legs, with φ indicating the dihedral rotation, and ψ indicating the horizontal rotation around the Y-axis in a Euclidean space, where the Y-axis is the upwards direction relative to the ground. The back legs of the robot are slightly less rotated. The physical robot (Figure 4.2) is not used in this implementation, but it is still used as a reference for a well-performing robot. Future work may include a transfer to the real world on the real robot (see Section 6.1.4).

Modified image from Real-World Reproduction of Evolved Robot Morphologies: Automated Categorization and Evaluation by Samuelsen et al. [29]
Figure 4.1: Composition of physical robot

4.1.1 Controller system

One of the key points of having the robot fully adapt is to let the robot develop the open-loop gait pattern itself, as opposed to predefining and parameterizing an already working gait pattern. There are two reasons for this. The first reason is that different behaviors, such as high lift, may

require a different style of walking to keep the performance at a reasonable level, and balance is key for a robot well suited to difficult terrain. The second reason for fully adapting the gait pattern is so that the MAP-Elites algorithm can be benchmarked as an alternative to optimization algorithms such as NSGA-II [13].

From Real-World Reproduction of Evolved Robot Morphologies: Automated Categorization and Evaluation by Samuelsen et al. [29]
Figure 4.2: Photo of physical robot

The robot, as described in Section 4.1, has a total of 10 joints: two for each leg and two in the body. These joints are servos that are controlled by Equation 4.1. This control-system is based on the system used in [15], using a similar controller-function to parameterize the open-loop controller. The controller-function is modified with an additional parameter f, making the robot capable of adjusting the oscillation-speed of a joint in order to increase stability in demanding situations such as extreme lift in the legs, or climbing a steep incline. The controller-function takes the parameters α, θ, β and f, which are defined by the genome (see Section 4.2.1). α defines the amplitude of the phase, meaning that a higher value of α will lead to more movement in the joint. θ is the offset of the phase, meaning that a θ-value of 0.5 will offset the phase by half a period, which leads to the phase being opposite of a phase with a θ-value of 0 or 1. The θ-parameter can give asynchronous joints in the open-loop control-system. β is the offset of the angle, meaning that the value shifts the interval that the phase operates within. A β-value of 10 can for instance make the servo phase between 30 and −10, whereas with a β-value of 0, the servo would phase between 20 and −20. The last parameter is f, which controls the frequency of the phasing. Lower values of f result in slower movement of the joint, whereas a high frequency will cause the joint to move faster.
It is important to note that the frequency is limited by the speed of the AX-12 servos, in the real world as well as the simulated ones in Robdyn. This means that the phase will usually have a limited reaction, especially with a large α-value. The resulting delay smooths out the angle, making the phase more sinusoidal than what the controller-function actually equates to. The search-space of the MAP-Elites algorithm can be reduced by introducing symmetry to the legs (see Section 4.2.1).

From Real-World Reproduction of Evolved Robot Morphologies: Automated Categorization and Evaluation by Samuelsen et al. [29]
Figure 4.3: Schematic of physical robot

The parameters of the robot are defined within bands. These bands are fairly arbitrary in terms of value, and are adjusted to maximize mobility whilst preventing the body parts intersecting (clipping) each other and colliding in a real-world scenario.

Parameter         Lower  Upper
Amplitude/α       0      40
Phase-offset/θ    0      1
Angle-offset/β
Frequency/f       0      2
Spread/s          0      40

Figure 4.4: Lower- and upper-bands of parameters

c_{α,θ,β,f}(t) = α tanh(4 sin(fπt + θ)) + β    (4.1)

Alpha cutoff threshold

An alpha cutoff threshold is what is often called a dead zone, meaning that values below a certain threshold are ignored. In order to make the robot capable of disabling joints, the controller-function has such an alpha cutoff threshold: α-values below 5.0 are interpreted as the joint being fully disabled and not moving during the loop. The first reason for this is to give the robot the advantage of controlling this feature, where it may find an advantage in disabling some of the joints. Another reason is that very low α-values cause needless twitching, resulting in additional wear on real-world servos, as well as reducing the stability of the robot in general.

Figure 4.5: Graph of controller-function

4.2 Using MAP-Elites and IT&E to adapt gait patterns

In [9], IT&E is used as a way of adapting to damage. This is done by searching in a behavioral-performance map whose dimensions represent how long the ith leg is in contact with the surface. In order to make the robot capable of adapting and traversing difficult terrain with obstacles, the gait pattern needs to be adjusted in terms of how high the robot lifts the legs, along with how far it sweeps one leg in front of the other. These features are simply described as lift and sweep. Having only two features means low computational complexity, and allows the fitness-performance map to be visualized in two dimensions. Adding more complexity makes it much more difficult to calculate a good result, and more difficult to visualize and understand the generated archive from the MAP-Elites algorithm.

4.2.1 Genome

The controller-function parameters (Section 4.1.1) for each of the joints are represented in the genome, and this is what makes up a genotype for the robot. The genome is made up of 20 real-number values from 0 to 1, in the EvoFloat 1 representation for Sferes2, which is a genome using 32-bit floating point values. The robot evolves by adjusting the parameters as ratios between the lower- and upper-band of the robot (see Figure 4.4), with a percentage from 0% to 100% represented by the real-number values 0 to 1. The genome is simplified in order to reduce the search-space of the robot. By making the assumption that stable gait patterns have the legs move in symmetry, the genome can be reduced by 40%, removing the need for the genome to define 4 additional joints. Instead the right legs are made identical to the left legs, only at the opposite phase, reducing the joints controlled from 10 to 6, and the joint-specific parameters from 10 × 3 = 30 to 6 × 3 = 18, which is a 40% reduction. The 20 values of the genome consist of 18 joint-specific parameters, three parameters per servo, for the six controlled joints. This is listed in Figure 4.6. The two last values in the genome are referred to as global parameters, which are parameters applied to all the joints. Frequency (f) is a global value in order to reduce the search-space, meaning that there is a trade-off in controlling the whole robot as a means to increase stability, instead of increasing the genome from 20 to 26 values by adjusting each of the joints individually. The second global parameter is referred to as spread (s). This is also the result of a simplification, where the value adjusts how far the legs are spread from each other, as opposed to letting the robot adjust this with the controller-function. This is a simplification both in terms of making the behavior more defined, as well as making it easier to define the upper- and lower-bands of the joints.
The spread of the legs then does not limit the boundaries that the controller can operate within, making the spread-value independent of the upper- and lower-bands.

4.2.2 Operators and Parameters

MAP-Elites implements many of the same operators and parameters as regular genetic algorithms. This application uses the same crossover-type, mutation-type and representation as [9].

Crossover-type: SBX [11]
Mutation-type: Polynomial [11]
Representation: Real numbers
Crossover-rate: 0.25
Mutation-rate: 0.1
η-m: 15
η-c: 10
Population size: 300
Parameter range: 0 to 1

H: Head-joint
1: Left/right front-upper joint
2: Left/right front-lower joint
3: Left/right rear-upper joint
4: Left/right rear-lower joint
T: Tail-joint
G: Global
α (alpha): Amplitude of phase
θ (theta): Offset of phase
β (beta): Offset of angle
S: Spread of legs (genome index 18)
f: Frequency (genome index 19)

Figure 4.6: The Genotype

SBX - Simulated Binary Crossover and Polynomial Mutation

Simulated Binary Crossover (SBX) was proposed in [11]. This crossover-type was designed with respect to the one-point crossover properties in binary-coded GA, where the average of the decoded parameter values stays the same before and after the operation. SBX takes one parameter, η-c, where large values give a high probability of creating solutions that are very similar to the parents, while small values allow for more distant solutions. The polynomial mutation is coupled with SBX. The polynomial mutation takes an index parameter η-m that controls the polynomial distribution of the mutation.

4.3 IT&E - Adapting through trial-and-error

The IT&E will read in an archive-file that is generated from the MAP-Elites algorithm. This archive-file is used to reconstruct the archive from the Sferes2 MAP-Elites in the Limbo framework. The archive-file is a clear-text .dat-file with one line per cell and 23 values per line, separated by spaces. The first value is the index of the cell, which is discarded in this implementation. This is followed by two feature-descriptors defining the position in the archive, one for the lift-value and one for the sweep-value. The last section of the line is the genome, which consists of 20 elements. With the archive loaded in the Limbo framework, the process of IT&E can start. The IT&E is a module written for Limbo, and the internal functionality is isolated from the user. The user will have to implement the evaluation-function, however. This evaluates the performance of the candidate that is being tested. In this implementation, the fitness-evaluation is similar to the fitness-function used in the MAP-Elites algorithm, where the genome is simulated in Robdyn, and the performance is measured in how far the robot can get in a limited amount of iterations.

4.3.1 Stopping criteria

A stopping criterion will manage the trial and error by telling when the algorithm has found a satisfactory result, or has exhausted the predetermined amount of tries that is allowed. The stopping criteria used in this application are the same ones used in the implementation of [9], namely the MaxPredictedValue- and the MaxIterations-criteria.

Figure 4.7: Simplified flowchart of IT&E

MaxPredictedValue

This criterion is defined as finding satisfactory results. In order to invoke the MaxPredictedValue, the evaluated performance - the perceived value - must be within a range of the estimated value. This is what is referred to as

the archived value: the fitness-performance that was archived by the MAP-Elites algorithm. The MaxPredictedValue operates with a parameter called ratio, which is set to 0.9 in the results presented in Chapter 5. This ratio is the tolerance of how well the newly adapted value has to perform in comparison to the best observed value in the archived solutions, checking if the best observed value is greater than fitness-value × ratio.

MaxIterations

The second criterion is the MaxIterations-condition, which is a limit on the amount of trials that the IT&E can perform before it has exhausted the search for a solution that reaches the MaxPredictedValue-criterion. If the n-iteration parameter of MaxIterations is set to 20, as done in Chapter 5, the IT&E will iterate over the procedure 20 times. If the MaxPredictedValue-criterion is not reached within those 20 trials, the IT&E-procedure will end, returning the best observed value. Typically 20 iterations should be enough to explore key areas of the feature-space and find a decent result.

4.4 Simulator

4.4.1 Placing obstacles

Obstacles are used in the simulation both to encourage more robust gait patterns, and to create different scenarios in which the adaptation-capabilities can be tested. By defining the two parameters size and amount in the simulation, it is possible to adjust the difficulty of the environment. The obstacles are flat blocks, which is a simplification done to make them easier to define in the simulator, as well as easier to replicate in the real world.

Figure 4.8: Simulation with 0_150_15 configuration

The blocks are placed in a small area of the environment, with

a two-dimensional Gaussian distribution in order to make the obstacles increase in size and quantity as the robot moves forward. In this way the weaker individuals in the evolutionary algorithm can gain some fitness, while the terrain becomes more challenging as the fitness increases and the robot manages to move further ahead.

Figure 4.9: Simulation with 0_0_0 configuration

// 2D gaussian
float bsize = a * exp(-((pow((x - xc), 2) / (2 * pow(s, 2)))
                      + (pow((y - yc), 2) / (2 * pow(s, 2)))));
ode::Object::ptr_t b(new ode::Box(*env,
    Eigen::Vector3d(x, y, bsize / 2 + (tan(tilt) * x)),
    10, bsize * 4, bsize * 4, bsize));
b->set_rotation(0.0f, tilt, 0.0f);

4.4.2 Fitness-function: Measuring performance

Performance is measured through the Robdyn-wrapper by using the head of the robot, described in Section 4.1. The simulator will constantly report the position of the robot as it moves in the environment. Measuring how far the robot has managed to move in the forwards direction gives a simple and robust fitness-measure, where fitter individuals are capable of moving further ahead in a limited amount of iterations. The measuring is done within a set amount of iterations. The reason why performance is measured in terms of how far the robot gets in a set amount of iterations instead of time, is that time is not constant in the simulation. The step increments discussed in Section 4.4.5 are used both to adjust the simulation speed as well as the quality of

the physics simulation, making the adjustment of speed nonviable due to it altering the results. By using a set amount of iterations instead, the measurements are unaffected by the speed the simulation is run at, and each step increment will always count up to the limit of iterations, whether it is a fast simulation with coarse increments of the physics, or a slow and accurate simulation with small increments. The fitness-value measured by the fitness-function is used arbitrarily and only in reference to other fitness-values, which means that there isn't any real-world analogy for how well the robot performs, or any benchmark of speed. The fitness-function in this implementation can be summarized as measuring how far the robot gets in a set amount of iterations.

4.4.3 Feature-function: Defining the behavior

The feature-function in this implementation is very simple. It takes the average α (amplitude) of all the lifting joints as the first feature, and of the sweeping joints as the second feature. Since α is defined as a parameter in the genome (see Section 4.2.1), the feature-function simply takes the four values from the genome, calculates the averages, and applies them as the definition of the behavior.

// gatest.cpp
std::vector<float> data;
data.push_back((ind.gen().data(6) + ind.gen().data(12)) / 2);
data.push_back((ind.gen().data(9) + ind.gen().data(15)) / 2);

Note the values 6, 12, 9 and 15, which can be seen in Figure 4.6 as the α of joints 1 and 3 (front and rear upper joints), and joints 2 and 4 (front and rear lower joints), which translates to the sweep- and lift-motion of the robot.

4.4.4 Visualization

The simulation has the graphical visualization as an option in order to reduce unnecessary computation when training the archive, done in Section 5.1.
Because the graphical visualization requires additional processing power, and ties the simulation speed to the frame rate of the rendering, it is disabled when building the MAP-Elites archive, and is only enabled when necessary, such as when inspecting a phenotype, or alternatively when wanting to inspect the IT&E-optimization.

4.4.5 Defining the environment

When running a simulation, the following items need to be defined:

Genotype/Genome
Tilt of ground
Count of obstacles
Size of obstacles
Step increments

The first parameter that needs to be input into the simulation is the genotype that makes up the phenotype being created and simulated. This is an array of floating point values, which defines the parameters to the controller-functions for each joint. This parameter is described in more detail in Section 4.2.1. Tilt is the parameter that defines the slope of the terrain, rotating the ground plane on which the simulated robot stands. Size and count define the roughness of the terrain, by which the difficulty of the simulation can easily be adjusted. Step increments is a technical parameter, and adjusts the resolution of the simulation, or the physics engine to be specific. A value of 0.008 will simulate the robot twice as fast as 0.004, but will lack the precision and stability of the lower step increment value. This value trades off accuracy against computational intensity. Increasing the step increments from 0.004 to 0.008 can cut training-time in half, which is very significant when trying to compute a multi-dimensional MAP-Elites archive, where the Curse of Dimensionality (Section 2.5) is applicable.

Multiple passes

One issue with evolutionary algorithms in a simulation is that the robot may exploit weaknesses of the simulation in order to maximize fitness. Such an issue was discovered in this implementation of MAP-Elites, where individuals in the archive had a measured fitness-performance that was not possible to replicate. This is evident in Figure 4.10, where archive is the measured performance upon creation, and perceived is the effort to replicate the fitness in an identical simulation. As Figure 4.10 shows, there is a large degree of falseness/inaccuracy in the fitness of the individuals, due to the archive being completely misguided by the few individuals that managed to exploit the simulation.
Unfortunately the cause of this exploitation was not found, due to it being relatively rare in the millions of evaluations done during the construction of the archive. The temporary solution to the problem was to perform the evaluation in two passes with a small element of randomness involved, then take the result with the lowest fitness. By doing this, the chance of such an exploitation surviving is virtually non-existent. This, however, comes at the cost of essentially making the evaluation twice as computationally expensive. The clear improvement can be seen in Figure 4.11, where the perceived value is the archived solution re-evaluated in the simulation in an attempt to replicate the same fitness-value.

Influence of randomness

In order to create unique sampling of test-data for the results, an influence of randomness is added to the evaluations. The randomness is created by slightly tilting the terrain by a random factor. This small change creates

what Edward Lorenz called The Butterfly Effect 2, where a tiny influence will have a large impact at the larger scale. Such a small and insignificant factor of tilt will make sure that every evaluation done by the simulator is unique, like it would be in the chaotic real world.

Phenotype: (64,32) (64,64) (64,96) (63,32) (63,64) (63,96)
Archive:
Perceived: F F F
Delta: F F F
F: Robot flipped during simulation, and fitness was void

Figure 4.10: Testing phenotypes from archive in simulation

Phenotype: (64,32) (64,64) (64,96) (63,32) (63,64) (63,96)
Archive:
Perceived:
Delta:

Figure 4.11: Testing phenotypes from archive in simulation after running the archived value for two runs

(a) ELITE before adding influence of randomness
(b) ELITE after adding influence of randomness

Figure 4.12: Comparison of adding influence of randomness into the sampling

Chapter 5

Results

The results are divided into two components. The first component is the MAP-Elites archive, and the second component is the IT&E-optimization done using the same archive discussed as the first component. The goal of this chapter is to provide the evidence required for a conclusion on whether or not IT&E works for this type of application, as well as helping to understand what benefits are related to this approach.

5.1 MAP-Elites archive

Figure 5.1: Archive at generation 2300

The results from the MAP-Elites algorithm, discussed in Section 2.3, will be analyzed in this section as the first component of the results. The resulting archive is shown in Figure 5.1, where this archive is the subject for analysis and testing. The purpose of analyzing the outcome from the MAP-Elites algorithm is to better understand some aspects of the result, how well it is working, along with the strengths and weaknesses of different individuals.

Figure 5.2: Position of test points

Using the figures of the visualized archives, it is easy to notice the interesting aspects and features of the archive, and to analyze the particular individuals in areas of interest, to further investigate the cause of performance. The simulation is done by choosing the individual of interest, then running it in Robdyn under the same conditions that it was trained in. The analysis done in this section uses the 0_0_0 scenario (see Table 6.2), the same condition that the archive was trained under.

5.1.1 Alpha cutoff

The first interesting feature in the archive is the dark line at the bottom, where the fitness rapidly drops to virtually zero. This is observed to be the effect of the cutoff-threshold of the α (amplitude), discussed in Section 4.1.1, where any alpha-values below a certain threshold result in the joint being disabled. In order to investigate this area of low fitness, and why the threshold has such an impact on the fitness of the individuals, the individual at position (127, 0), as well as some of the neighboring individuals, are observed and evaluated in the simulator. Individual (127, 0) is located in the absolute bottom-right of the archive visualized in Figure 5.2, and is one of the individuals in the archive with the lowest fitness-value. Looking at the simulation of the poorly performing individual (127, 0), as well as the surrounding neighbors, it is clear that the lack of lift in the legs causes the robot to shuffle around on the spot, pushing itself around using the joints in the body and the upper joints of the legs. The robot

does not have a determined movement in the forwards direction, and any minor amounts of fitness from these particular individuals can be disregarded as noise from the robot shuffling around and gaining fitness by random chance. The dark area at the bottom points to a feature where a joint on all four legs is disabled, due to the average of the alpha-values being below the given threshold. The disabling of these essential joints causes the robot to perform poorly.

5.1.2 Fitness correlation of the Y-axis

Watching the fitness in correlation to the Y-axis introduces a few interesting aspects of the archive. The fittest individuals do not seem to be heavily influenced by the Y-axis, and as soon as the legs are out of the alpha cutoff area, the robot immediately starts gaining performance. This evidently means that the robot is capable of developing a well-performing solution regardless of the dihedral movement of the legs. In the area after the alpha cutoff threshold, the archive quickly starts producing well-performing individuals. The fitness then tapers off as we move up the archive, until it reaches another, larger, area that produces individuals with comparatively good fitness. Looking at Figure 5.3, the high-fitness areas are marked. Even though there are areas where the fitness is generally higher, the fluctuations within those areas are still very high among close neighbors. The localized areas of high fitness are a natural consequence of how the MAP-Elites algorithm works, and the areas may be merged together, or overtaken, by a new maximum found in later generations of the archive.

Figure 5.3: Areas of high fitness

Analysis 1

In order to observe whether there are any differences in terms of behavior and performance as the average amplitude of the lower joints changes, several individuals have been simulated to see how they behave compared to each other. With individual (100, 41), which is archived as one of the best performing individuals, there is a clear and well-performing gait with moderate lift in the legs. As compensation for the lower lift in the legs, this individual uses twisting of the body in order to maximize the lifting, as seen in Figure 5.4.

Analysis 2

By observing two individuals with an equal X-axis value, in correlation to the area of high fitness on the Y-axis, the behavior may give some insight into why some areas perform better than others. Phenotype (50, 96) and individual (100, 96) both seem to have synchronized and well-performing gait patterns that are capable of giving the robot fast movement, which translates to high fitness. The fitness does not seem to be due to the specific features of these individuals, but rather that the individuals are based on a different maximum, making the robot use a trait that gives good performance. In this analysis there are signs of individual (50, 96) substituting the lack of movement in its legs with movement in the body-joints as well, which means that the robot is twisting the head- and tail-joints to compensate for the low sweeping-motion in the upper joints of the legs.

Analysis 3

Individual (100, 125) is one example of an individual that is not performing as well as its neighbors, despite being in a generally good area. This can change over the next few generations, as this discrepancy may be due to the later generations not having found a replacement individual that has the features needed to occupy the cells from an older generation.
Simulating this individual, it is clear that there is a determined gait pattern, but that it is stumbling due to the legs not being perfectly synchronized, and with the lifting of the legs being high, it easily throws itself off course and diverges to the sides, meaning that fitness is lost.

5.1.3 Fitness correlation of the X-axis

The X-axis seems to have a much stronger correlation to fitness compared to the Y-axis of the archive. With small values in the average amplitude of the upper joints, the fitness is generally very low. The robot then starts to slowly gain fitness as the average amplitude increases. Some areas, such as the high-fitness area tested in Analysis 3, will gain fitness more rapidly than areas with other values of average amplitude of the lower joints, which is what is causing the areas of high fitness illustrated in Figure 5.3. The difference in these areas of high fitness may be closely related to

the difference in generations, and the MAP-Elites algorithm narrowing in on a new maximum. This may imply that it should be possible for the lower area to gain fitness in a similar way to the other areas of high fitness.

Analysis 1

Simulating individual (1, 27), along with its neighboring individuals, shows that the gait pattern is determined, but weakened by the lack of sweeping movements in the legs. This makes the steps very small, with the body-joints doing most of the movement to compensate, giving the robot some instability, as the twisting of the body will easily throw the robot off course.

Analysis 2

In order to figure out what causes the slower gain of fitness along the X-axis, the individuals (6, 32), (12, 32) and (96, 32) are simulated to see how low fitness individuals compare to high fitness individuals at the same lift-value. Starting with individual (6, 32), it is clear that the issues found in Analysis 1 with individual (1, 27) still apply: the individual is constrained by the lack of sweeping motion in the legs, which causes the body-joints to do most of the work, making the gait unstable. Individual (12, 32) shows a gradual improvement by moving the work from the body-joints over to the legs, thereby regaining the ability to lift the front legs by moving the opposite back leg forwards, making the robot fall backwards and twist the body to lift the front legs, as can be seen in Figure 5.4. Individual (96, 32) is one of the individuals with the overall highest fitness-value, and the simulation shows that the robot uses the low lift to its advantage, creating long strides with minimal lift. This creates a gait that is stable and fast, although it may be sensitive to obstacles.

Figure 5.4: Phenotype (12, 32) exploiting the back legs to lift the front legs

(a) Gen. 5 (b) Gen. 10 (c) Gen. 50 (d) Gen. 100
Figure 5.5: MAP-Elites archive in early generations

5.1.4 Location of fitness

The individuals with high fitness are in general spread widely around the archive, which indicates that the MAP-Elites training works well, providing a diverse selection of individuals with good fitness in the feature-space. Having the results spread out means that the robot should be able to adjust its features well, and be generally robust to changes that can be overcome by searching in the feature-space. The most fit individuals are, however, concentrated in some specific areas, which is assumed to be a characteristic of the MAP-Elites algorithm expanding a new maximum within a certain area of the feature-space.

5.2 Building the MAP-Elites archive

The purpose of analyzing how the MAP-Elites archive transforms through the generations is to better understand how the archive evolves, along with how the proposed method performs in terms of required time and computational power. Figure 5.6 illustrates how the archive improves through the generations of the MAP-Elites algorithm described in Section 2.3. The first 10 generations output a Gaussian-like distribution of around 3000 elements. At this point the algorithm mainly occupies the cells with any offspring produced. The highest fitness value at the 10th generation is ~0.45, with some individuals having negative fitness. This is caused by the low population of the archive and a lack of cell competition. Although the fitness is poor at this stage, the best individual rapidly gains performance in the early generations, up to around generation 50, as illustrated in Figure 5.7. As the MAP-Elites algorithm iterates from generation 50 through to 500,

(a) Gen. 100 (b) Gen. 300 (c) Gen. 500 (d) Gen. (e) Gen. (f) Gen.
Figure 5.6: MAP-Elites archive through generations

there is very little improvement in terms of the fittest individual, but there are clear signs that the mean fitness of the archive has improved, as the poorly performing individuals are dominated and replaced by ones with better fitness. This is shown in Figure 5.8, where the mean steadily improves even though the increase of the fittest individual tapers off. The archive also begins to fill up rapidly, and is close to reaching the maximum at approximately generation 500, as seen in Figure 5.9b. When the archive starts to be populated with better individuals, new offspring also have a tougher challenge in dominating their feature-defined cells in the archive, motivating evolution towards higher fitness in order to replace the old occupants of the cells.

5.2.1 Parameters

Building the archive requires very little in terms of defined parameters. Figure 5.10 shows the parameter definition for Sferes2. The first parameter, behav_dim, defines the feature-space dimensions that will be mapped in the archive. epsilon is a threshold for dominating a cell; it is not used in this implementation. behav_shape defines the resolution of the archive. 128x128 is used for this application, and the value depends on how fine the adjustments are that the user needs to make, where a higher

Figure 5.7: Best fitness per generation
Figure 5.8: Mean of archive per generation

resolution can give finer adjustments. init_size is G in Figure 2.6, and is the initial population applied to the archive; size is the batch size kept as the population. nb_gen is a termination criterion for when the archive reaches a certain generation. This can usually be set to some large value if the user wants to terminate the computation by choice. dump_period defines how often MAP-Elites should produce an archive-file. This limit prevents the MAP-Elites algorithm from outputting too much data, as the archives can become large; the algorithm outputs about 20 MB per generation for a 128 by 128 archive. The parameters min and max define the range that the EvoFloat genome operates within. For this application, the range is between 0 and 1, representing a range (see Section 4.2.1). cross_rate and mutation_rate influence the occurrence of crossover and mutation; these values are adjusted for what gives the best fitness-gain in the current application. For eta_m, eta_c, mutation_type, and cross_over_type, see Section
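The archive-update rule that these parameters configure can be sketched as follows. This is a minimal illustration, not the actual Sferes2 implementation; the names Individual, try_insert, cell_index and kRes are assumptions made for the example. An offspring occupies the cell addressed by its behavior descriptor only if the cell is empty or the offspring's fitness beats the current occupant's by more than epsilon:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <optional>
#include <vector>

// Illustrative MAP-Elites archive update (not the Sferes2 code itself).
// A behavior descriptor (avg. sweep, avg. lift) in [0,1]^2 addresses one of
// behav_shape = 128 x 128 cells; an offspring replaces the occupant only if
// its fitness exceeds the occupant's by more than epsilon (0 in the thesis).
struct Individual {
    std::array<double, 20> genes;  // EvoFloat<20> genome, values in [0, 1]
    double fitness;                // measured performance in simulation
    std::array<double, 2> behav;   // behavior descriptor in [0, 1]^2
};

constexpr int kRes = 128;  // archive resolution per feature dimension

// Map a behavior descriptor to its cell index in the flattened grid.
int cell_index(const std::array<double, 2>& b) {
    int x = std::min(kRes - 1, static_cast<int>(b[0] * kRes));
    int y = std::min(kRes - 1, static_cast<int>(b[1] * kRes));
    return y * kRes + x;
}

// Insert an offspring; returns true if it occupied (dominated) its cell.
bool try_insert(std::vector<std::optional<Individual>>& archive,
                const Individual& ind, double epsilon = 0.0) {
    auto& cell = archive[cell_index(ind.behav)];
    if (!cell || ind.fitness > cell->fitness + epsilon) {
        cell = ind;
        return true;
    }
    return false;
}
```

With epsilon = 0, as in the configuration above, any strict improvement is enough to take over a cell, which is what drives the steady rise of the archive mean.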

(a) Up to Gen. (b) Up to Gen. 500
Figure 5.9: Phenotypes in archive, per generation

5.3 Adapting using IT&E

With the MAP-Elites archive built, it can be applied to the robot, where it is possible to search for an optimal solution using the IT&E-algorithm described in Section 3.3. To demonstrate the effectiveness of the IT&E-algorithm, the robot is given a range of different scenarios where it must compensate for changes in the environment by using the proposed technique of trial and error. The solutions that IT&E finds for each scenario are observed and compared with the best individual, called ELITE, in order to see if IT&E-optimization can contribute to fitness-gain by adapting the gait. The IT&E-algorithm is also compared to the best individual optimized in a scenario with obstacles in the environment. This individual is called ELITE_O.

struct Params {
  struct ea {
    SFERES_CONST size_t behav_dim = 2;
    SFERES_CONST double epsilon = 0;
    SFERES_ARRAY(size_t, behav_shape, 128, );
  };
  struct pop {
    // number of initial random points
    SFERES_CONST size_t init_size = ;
    // size of a batch
    SFERES_CONST size_t size = ;
    SFERES_CONST size_t nb_gen = ;
    SFERES_CONST size_t dump_period = ;
  };
  struct parameters {
    SFERES_CONST float min = 0.0f;
    SFERES_CONST float max = 1.0f;
  };
  struct evo_float {
    SFERES_CONST float cross_rate = f;
    SFERES_CONST float mutation_rate = 0.1f;
    SFERES_CONST float eta_m = f;
    SFERES_CONST float eta_c = f;
    SFERES_CONST mutation_t mutation_type = polynomial;
    SFERES_CONST cross_over_t cross_over_type = sbx;
  };
};

[...]

typedef gen::EvoFloat<20, Params> gen_t;
typedef phen::Parameters<gen_t, GaitOpt<Params>, Params> phen_t;
typedef eval::Parallel<Params> eval_t; // for SMP
typedef boost::fusion::vector<stat::Map<phen_t, Params>,
                              stat::BestFit<phen_t, Params> > stat_t;
typedef modif::Dummy<> modifier_t;
typedef ea::MapElites<phen_t, eval_t, stat_t, modifier_t, Params> ea_t;

Figure 5.10: gatest.cpp - The parameters

5.3.1 How IT&E searches the archive

The process of finding the solution starts with the initial position with the highest confidence value, which in the first iteration is the individual with the best archived fitness. In the evaluation listed in Table 6.3, the environment differs from the environment the archive was trained in: the archive was built using scenario 0_0_0, and the new environment is scenario 0_150_15. The new scenario contains obstacles, meaning that the individuals in the archive may behave differently than when they were archived with their fitness performance-measure.
IT&E searches the feature-space for individuals in the archive that work well in scenario 0_150_15. In the first two iterations of the evaluation in Table 6.3, illustrated by Figure 5.11, the robot flipped over and the fitness was set to 0. On the third iteration the IT&E-optimization found a working solution with a fitness

Figure 5.11: IT&E searching for the best solution; see Table 6.3 for data

of , obtained by evaluating the individual in the simulator, which is regarded as mediocre compared to the original fitness of . The fourth iteration is another improvement over the previously best solution, and with a fitness of , the IT&E-optimization has now found a solution with an acceptable amount of fitness. The optimization keeps looking for a better solution until one of the two stopping criteria, MaxPredictedValue or MaxIterations (see Section 4.3.1), is met. At iteration 8 the IT&E-optimization finds a new improvement over the previous best, and since this solution meets the MaxPredictedValue stopping criterion, the process terminates and the solution, with a fitness of , is chosen as the final result. The fitness represents 88% adaptability relative to the best individual in the archive, meeting the MaxPredictedValue criterion, and is selected as the output of this optimization. This value is found in the archive by looking up the features described in Table
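The search loop just described can be sketched as below. Note that the real IT&E keeps a Gaussian-process model of performance over the whole map, so one trial also updates the predicted value of neighboring behaviors; in this simplified sketch the prediction for each candidate is just its archived fitness, and a tried candidate is simply excluded. Candidate, ite_search, and the evaluate callback are illustrative names, not the limbo API used in the thesis.

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// One entry per occupied archive cell.
struct Candidate {
    double archived_fitness;  // performance recorded when the map was built
    bool tried = false;
};

// Returns the index of the best solution found, or -1 if none worked.
int ite_search(std::vector<Candidate>& map,
               const std::function<double(int)>& evaluate,  // one trial in the new environment
               double alpha = 0.9, int max_iterations = 20) {
    double best_archived = 0.0;
    for (const auto& c : map)
        best_archived = std::max(best_archived, c.archived_fitness);

    int best_idx = -1;
    double best_observed = 0.0;
    for (int it = 0; it < max_iterations; ++it) {  // MaxIterations criterion
        // Pick the untried candidate with the highest predicted value;
        // the very first pick is the individual with the best archived fitness.
        int pick = -1;
        for (int i = 0; i < static_cast<int>(map.size()); ++i)
            if (!map[i].tried &&
                (pick < 0 || map[i].archived_fitness > map[pick].archived_fitness))
                pick = i;
        if (pick < 0) break;  // archive exhausted
        map[pick].tried = true;

        double observed = evaluate(pick);  // e.g. 0 if the robot flips over
        if (observed > best_observed) {
            best_observed = observed;
            best_idx = pick;
        }
        // MaxPredictedValue criterion: stop once the observed fitness is
        // within a fraction alpha of the best value the map predicts.
        if (best_observed >= alpha * best_archived) break;
    }
    return best_idx;
}
```

This mirrors the run above: the top-ranked individual fails (fitness 0), so the search falls through to the next-best predictions until one satisfies the MaxPredictedValue threshold or the iteration budget runs out.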

5.3.2 Solutions found by IT&E

The first procedure in evaluating the performance of IT&E is to look at the outcome produced for each scenario, and to investigate the causes behind each selection. The IT&E is performed in the same manner as described in Section 5.3.1, where each result is one evaluation.

Scenario 0_0_0

This is the configuration that the MAP-Elites archive was trained with, so it is assumed that the fitness will be close to the best fitness measured in the archive (see Figure 5.7). Due to the lack of random obstacles, the evaluations will be similar to each other and robust, as every run of the simulation will be virtually identical.

Tilt 0, Count 0, Size 0
Avg. Lift | Avg. Sweep | Fitness
Table 5.1: Solutions found by IT&E for 0_0_0

The IT&E algorithm chooses the same individual every run, as seen in Table 5.1, which is individual (107, 68). This is to be expected due to the deterministic environment in the simulation. With a mean fitness of around 1.32, there is an adaptability of about 91%, with the robot performing almost as well as the best individual in the archive. Examining a single run of the IT&E-optimization, it is observed that in some cases the algorithm did start out with the highest performing individual, but the robot flipped during the evaluation, which led the optimization to other well performing individuals. This shows that there will always be some discrepancies when simulating the archived values. The reason the best individual failed in these tests is that the individual is not as robust, and slight variations in the simulation, such as the step increments (see Section 4.4.5), may cause the robot to act differently compared to the evaluation done in the process of building the archive. This shows that the individual is more dependent on circumstances, rather than building fitness on a robust gait. The IT&E provides an extra measure of robustness by testing solutions

and discarding them if they do not prove to be as expected, ensuring that the solutions are robust to minor changes in the environment. This also checks whether the solution was the result of exploitation, meaning that the IT&E-optimization may aid in crossing the reality gap (see Section 6.1.4).

Scenario 0_150_15

This configuration introduces some obstacles, requiring the robot to have features that allow the legs to move over the obstacles. The gait must also prevent the robot from stumbling or flipping due to obstacles, which may cause stability issues. The assumption for these optimizations is that the robot will require some amount of lift in the legs, and that IT&E will provide more variety in the solutions found. This is due to the introduction of the stochastic component of obstacles in the simulation, providing its own influence of randomness (see Section 4.4.5).

Tilt 0, Count 150, Size 15
Avg. Lift | Avg. Sweep | Fitness
Table 5.2: Solutions found by IT&E for 0_150_15

The results in Table 5.2 point towards IT&E preferring solutions with a significant amount of lift. The solution with the lowest amount of lift is also the weakest solution found by the IT&E-optimization, although the difference may be small. Using Figure 5.1, it can be assumed that the MAP-Elites algorithm seems to prefer solutions with some amount of leg-lifting regardless, and that stronger solutions in the archive generally have significant enough lift to perform well in the environment with the obstacles. The sweeping motion in the legs is less significant. This may, however, be due to the body-joints replacing the upper, sweeping joints in the legs, as discussed earlier. It may also suggest that moving the legs forward is not as significant for overcoming the obstacles, given that the fitness of the individual is sufficient despite the lack of average sweep in the legs.

Scenario 0_50_30

In this configuration the amount of obstacles is reduced to 1/3, but the size of the obstacles is doubled. This creates an environment with fewer and larger blocks.

Tilt 0, Count 50, Size 30
Avg. Lift | Avg. Sweep | Fitness
Table 5.3: Solutions found by IT&E for 0_50_30

The results shown in Table 5.3 point towards the same uncertainty in solutions as the previous results for scenario 0_150_15, shown in Table 5.2. The results are visualized in Figure 5.12 together with the previous scenario to show how the results differ, and how the robot changes behavior as the obstacles become fewer and larger. In this 0_50_30 scenario, the results clearly have a bias towards higher lift, except for two outlier samples, which can be seen at the bottom of Figure 5.12. These outliers, both with a sweep-value of 0.66 and a lift-value of 0.11, amount to fitness-values of 0.79 and 0.90 in the data set for Figure 5.12. This is below the average for this type of obstacle scenario (Table 5.6) compared to the other observed evaluations. This low fitness is likely caused by random chance in the placement of the obstacles, which happened to cater for a solution with moderate sweep and low lift. It is not uncommon to find outliers in the results, due to the environment having a large impact on the outcome of the simulation, where one misplaced rock may throw the robot off balance. The reason many of the results also prefer a higher sweep-value may be due to simplification in the simulation: stepping over the obstacles and using them as support may be preferred over standing on them. The reason for this could be that the support helps the robot gain traction, whilst standing on objects may cause the robot to slip and lead to instability.
In the real world, the roughness of real obstacles, such as rocks, will give better traction for the legs than what is given in this simulation, and stepping over an obstacle may not be practical in the real world with more complex obstacles of organic shapes. Some of the solutions found by IT&E do not require as much sweep, such as the outliers in solution number 3, Table 5.3.

Scenario 01_150_15

By adding 150 obstacles of size 15, as well as an incline of 0.1 radians (5.72°), this scenario poses a greater challenge for the robot, where the use of IT&E may be essential for providing a feasible gait pattern.

Tilt 0.1, Count 150, Size 15
Avg. Lift | Avg. Sweep | Fitness
Table 5.4: Solutions found by IT&E for 01_150_15

Using Figure 5.13 as a reference, there are no apparent differences in the types of solutions found for 01_150_15 in comparison to the solutions from scenario 0_150_15. Based on the lack of change, as seen in Figure 5.13, it can be assumed that the obstacles have too significant an impact for the robot to be noticeably influenced by the incline in elevation. The only significant change seems to be a slight decrease in fitness.

Comparison of incline and decline

Walking on an incline or decline may be challenging for a robot, and can cause a gait pattern optimized for flat ground to be unstable or perform worse. For instance, a steep incline may cause gaits with high lift to throw the robot off balance, whilst gaits with high sweep may cause the legs to lose grip due to aggressive movement. The robot is tested in scenarios with 11 levels of incline and 11 levels of decline, evaluated using the IT&E-optimization. The optimization result of each scenario is used as the sampled data to measure how the incline and decline vary the features of the robot. The variations in the two scenarios are used as an influence of randomness (see Section 4.4.5) in order to create a better sampling. The tilt of the terrain ranges from 0.19 to 0.21 with a step of 0.002 for each sample, resulting in a total of 11 variations. This is done for both scenarios: 0.19 to 0.21 in 02_0_0, and -0.19 to -0.21 in n02_0_0. The slight variations will have minimal impact on how challenging the tests are for the robot.
In order to create better sampling, the scenario with a decline in elevation gets an exception in the stopping criteria, which is changed to MaxIterations at a value of 20,

as opposed to the MaxPredictedValue stopping criterion (see Section 4.3.1). The change is due to the decline giving the robot an overall increased fitness, which means that the MaxPredictedValue criterion is satisfied by the first individual evaluated, since this individual performs as well as, or better than, it did when the archive was created. As IT&E is not interested in maximizing performance as much as finding well performing individuals, the algorithm is satisfied with the first evaluated individual. Stopping at the first satisfactory result, instead of searching for the best value within a limited number of iterations, causes the IT&E-optimization to mainly return the first solution it evaluates. This is ideal behavior when adapting the robot to a new scenario, but makes the sampling of solutions poor in the decline scenario; the changed criterion allows a better analysis of the difference between incline and decline in terms of solutions found.

Symbol size is the amount of hits for that individual.
Figure 5.12: Scatterplot of solutions from two scenarios

Figure 5.14 illustrates the pattern with positive tilt (incline) and negative tilt (decline) of the terrain. It is clear that the individuals chosen are influenced by the tilt of the terrain. With a positive tilt, IT&E tends to choose solutions with high values of lift. This may be due to the incline requiring the robot to lift its legs higher in order to step further forward. There are, however, outliers in the sampled data, where IT&E chooses solutions

with low lift and moderate sweep-values, which may indicate that it is possible to get good results regardless of the features of the gait patterns.

Symbol size is the amount of hits for that individual.
Figure 5.13: Scatterplot of solutions from two scenarios

When applying negative tilt, IT&E is very determined in its choice, focusing heavily on single solutions. Figure 5.14 shows that the solutions found with negative tilt almost always have low lift and high sweep, choosing the best value in the bottom high fitness area (see Section 5.1.4). This is assumed to be because high lift makes the robot unstable when moving down a decline. Lift in the legs is not necessary, and may cause slower gaits and instability of the robot.

5.3.3 Experiment 1 - IT&E Adaptation vs. Best Individual

ELITE

In the upcoming experiment, the best individual in the archive, which represents the optimization algorithm, is referred to as ELITE. ELITE is the best individual in an archive trained for 2300 generations in scenario 0_0_0.

Experiment

In order to test how the proposed technique improves performance, it is compared to the naive solution that does not include adaptation. To perform this test, the IT&E adaptation is put up against the best individual in the archive. The best individual in the archive is in principle the same as the result of a standard optimization algorithm (Section 2.3.1),

as the best result will always be the individual that performs best for a given task, i.e. the individual is already optimized for a solution. The test consists of two runs, the IT&E solution and the best individual, with both of them applied to seven different scenarios: 0_0_0, 0_150_15, 0_50_30, 01_0_0, 02_0_0, n01_0_0 and n02_0_0 (see Table 6.2). Figure 5.15 illustrates the effect of IT&E compared to standard optimization, i.e. applying the individual best suited for a specific environment to other environments. ELITE performs best in the scenario it is optimized for, meaning that the performance advantage is greatest in scenario 0_0_0. The solution found by IT&E is very close in terms of performance, with a fitness of , which is more than the fitness of the best individual with a fitness of . On average, the solutions found by IT&E are better than the performance of ELITE. In some scenarios, such as n01_0_0 and n02_0_0, the difference is not as significant, whereas in most other scenarios IT&E provides more robust and predictable solutions with a significantly higher mean fitness, and a crucial robustness eliminating any chance of the robot flipping over. The best individual flips the robot in approximately 50% of the evaluations in the sampled data, as can be seen in Table 5.6. Although ELITE can perform well in some scenarios, its mean fitness is generally low relative to the IT&E-optimized solution, and it is rendered infeasible by the fact that 50% of the evaluations resulted in the robot flipping over and becoming incapacitated.

Symbol size is the amount of hits for that individual.
Figure 5.14: Plot of incline and decline from 0.19 to 0.21 and to , with a forced stopping criterion of MaxIterations set to 20

5.3.4 Experiment 2 - IT&E Adaptation vs. Best Individual trained with obstacles

ELITE_O

ELITE_O is similar to ELITE, in that it is the best individual in an archive trained for 2300 generations. ELITE_O is, however, trained in scenario 0_150_15.

Experiment

To further investigate the performance of the IT&E-optimization, its results are compared to another naive approach to the problem. Similar to the first experiment (Section 5.3.3), this experiment uses the best individual in the archive as a representation of the result from a standard optimization algorithm. In this test the IT&E-optimization is put up against ELITE_O; in this case the archive was built in scenario 0_150_15. This means that the individual should be more robust against obstacles, and that solutions overly dependent on the flatness of the terrain, for instance by dragging the legs, are discouraged. The results can be seen in Figure 5.15b and Table 5.6. The results show a clear improvement over ELITE, with more robustness and a higher mean performance. The results for scenario 0_150_15 are lower than the score from ELITE, as well as the IT&E-optimized solution. The difference between ELITE at and ELITE_O at is , and the overall results from both are mainly governed by a few poor samples reducing the mean. The difference between the two variations of the best phenotype, ELITE and ELITE_O, can be disregarded as noise.
The stability is too poor to provide any determined value of performance, which is indicated by the standard deviation of the fitness from the evaluations, seen in Table 5.7.

               0_0_0  01_0_0  0_150_15  02_0_0  0_50_30  n01_0_0  n02_0_0  Total
ELITE          20/20  4/20    15/20     0/20    3/20     20/20    20/      %
ELITE_O        20/20  20/20   19/20     19/20   20/20    6/20     20/      %
IT&E Optimized 20/20  20/20   20/20     20/20   20/20    20/20    20/20    100%
Table 5.5: Successful solutions out of 20 runs. See Table 6.5

               0_0_0  01_0_0  0_150_15  02_0_0  0_50_30  n01_0_0  n02_0_0
ELITE
ELITE_O
IT&E Optimized
Table 5.6: Mean out of 20 runs. See Table

               0_0_0  01_0_0  0_150_15  02_0_0  0_50_30  n01_0_0  n02_0_0
ELITE
ELITE_O
IT&E Optimized
Table 5.7: Standard deviation out of 20 runs

(a) Best Phenotype, aka. ELITE (b) Best Phenotype optimized for 0_150_15, aka. ELITE_O (c) IT&E Optimized
One point for each unique solution.
Figure 5.15: IT&E vs. best performing individuals. See Table


Chapter 6

Conclusion

This chapter builds on the results from Chapter 5 and gives some insight into what these results mean, along with the possibilities of expanding on them.

6.1 Discussion

The results show a large gain in performance in comparison to the naive approach, which only uses the best strategy for a single scenario. There are clear signs that the proposed method of gait adaptation works better than the naive approach without adaptation to changes in the environment, as is expected when applying robots evolved with a genetic algorithm that optimizes for, and overfits (Section 2.1.4) to, a particular environment.

6.1.1 Performance of the MAP-Elites archive

The MAP-Elites archive gave a very interesting insight into the correlation between features and fitness. Analyzing the archive was in and of itself a valuable outcome of the results, as it can help to understand what causes fitness of the robot in particular scenarios, and to identify the key features that define a well performing robot. This can aid the design process of the robot. For instance, if the MAP-Elites archive visualizes a rapid drop in fitness when the lift of the legs exceeds a certain value, it might prove useful to limit the lift of the legs in order to narrow the search space to areas that are useful to the robot.

6.1.2 Performance of the IT&E-optimization

Comparing the IT&E-optimization against the best individuals ELITE and ELITE_O measures how much of a difference the adaptation of gait patterns makes over the naive method of applying the best optimized solution to a new environment. The performance gain in the experiments can be used to show that adaptation using IT&E has a large impact, with a mean performance-increase of 39.4% (Table 5.6) from to , and

a stability increase of 50.6%, up to 100% stability (Table 5.5), for the optimized results, and for the best individual. IT&E also saw a 10% increase over the stability of ELITE_O.

Significance of features

One issue with the IT&E algorithm is that the optimization works on the robustness of the individuals, rather than adjusting the features of the solutions to adapt the robot. This may imply that the trial and error is more focused on trying out a local area of high fitness, and that features are irrelevant in the adaptation process: the IT&E algorithm may not gain its performance advantage through adaptation, but rather through a small localized search among the best performing individuals in the archive. The features may still have an impact on performance after IT&E has exhausted the best individuals in the archive, but usually the best individuals that perform worse than expected still perform better than the lower ranking individuals in the archive. Despite this, one might argue that Section 5.3.2 shows a pattern pointing towards some reliance on features, in the way that the solutions change as the scenario changes. The solutions found by IT&E are mostly located in the areas of high fitness, described earlier and visualized in Figure 5.3. These areas are the first to be exhausted by trial and error before the search trends towards areas of lower fitness. This preference for high fitness areas may argue for higher values of MaxIterations, so that IT&E can explore more of the archive, along with the effect of the features, before it settles for a low performing individual that has a high degree of confidence due to its high archived fitness-value. It is more difficult to have a preferred behavior in the gait adaptation scenario in comparison to the application used in [9], where the feature-space is closely related to the performance.
In the application of adapting the gait to the terrain, it is not certain whether high leg lift is the better approach for rough terrain, or whether small steps are the best way to climb an incline. It is believed that a strong significance of, and reliance on, features is essential for IT&E to work well. If the insignificance of features proves to be a problem, the same performance gain can be achieved by simply taking the best performing individuals of the MAP-Elites archive, trying them all out in the new scenario, and selecting the best one. Whether or not the features are irrelevant does not invalidate the performance gain of the proposed technique.

6.1.3 MAP-Elites and IT&E vs. Traditional optimization

The benefit of MAP-Elites and IT&E is that the archive can be trained once, after which the optimization is applied almost instantly. During the testing in Chapter 5, the evaluations from IT&E were close to instantaneous, as the MAP-Elites archive had already evolved the required solutions. The

naive approach to adapting the gait pattern is to restart the evolution every time the robot encounters an environment it needs to re-adapt to, which may take a very long time and give little gain in fitness over adapting using IT&E. In order to compare IT&E to traditional optimization algorithms, the assumption is that the best individual is equal to a solution produced with standard evolutionary optimization techniques. To make a fair comparison between illumination algorithms and optimization algorithms, the test uses the results from the experiments (Sections 5.3.3/5.3.4) with the mean of the upper quartile of ELITE and the mean of the upper quartile of ELITE_O (see Table 6.5 in the appendix). ELITE and ELITE_O are the best performing individuals in two different archives: one trained without obstacles, ELITE, and one trained in scenario 0_150_15, called ELITE_O, or best phenotype with obstacles. The reason for using a sample in the upper quartile is that the evaluations of ELITE and ELITE_O were clearly too unstable (see Table 5.7) to let the mean represent how well the best individuals perform regardless of stability issues. The mean of the upper quartile gives a better overview of the performance, rather than the stability, since it disregards the instability of ELITE and ELITE_O caused by what is assumed to be overfitting (see Section 2.1.4) and sensitivity to changes. The reason such assumptions and simplifications are made is to see how the IT&E-optimization measures up to an ideal optimization algorithm, i.e. a standard genetic algorithm. These results tell us that optimizing for a specific environment using traditional genetic algorithms will render results about as good as IT&E in some cases.
It is suspected that overfitting has a large influence in these types of applications, which may explain why ELITE_O performs so poorly in the scenarios it was trained in, even within the upper quartile of the results. The biggest reason why IT&E is the better alternative of these three approaches is that IT&E only needs to be trained once to adapt itself to any of the terrain types, while ELITE and ELITE_O represent a standard optimization algorithm that has to be trained for each type of scenario. The result is that even with a small performance gain from using a specifically optimized individual, the computational cost makes the optimization approach less attractive in these types of applications.

Type            Mean upper quartile
ELITE
ELITE_O
IT&E Optimized

Table 6.1: Mean of Upper Quartile (Top 5 samples)
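The IT&E adaptation loop compared against ELITE and ELITE_O above can be sketched in heavily simplified form. This is an illustrative, assumption-laden stand-in, not the thesis implementation: the actual algorithm (Cully et al. [9], via the Limbo library) uses Bayesian optimization with a Gaussian process, so each trial also updates the estimates of untried cells, whereas this sketch only corrects the cell it tried:

```python
# Heavily simplified trial-and-error adaptation over a pre-trained archive.
# Illustrative only: the real IT&E [9] uses a Gaussian process so that one
# trial also updates estimates of neighbouring, untried cells.

def adapt(archive_fitness, evaluate, stop_ratio=0.9, max_trials=10):
    """archive_fitness: cell -> fitness predicted in simulation.
    evaluate: cell -> fitness measured in the changed environment."""
    expected = dict(archive_fitness)        # start from simulated estimates
    best_cell, best_real = None, float("-inf")
    for _ in range(max_trials):
        cell = max(expected, key=expected.get)  # most promising elite
        real = evaluate(cell)                   # one trial in the field
        expected[cell] = real                   # correct that estimate
        if real > best_real:
            best_cell, best_real = cell, real
        # stop once a trial comes close to the best remaining estimate
        if best_real >= stop_ratio * max(expected.values()):
            break
    return best_cell, best_real

# Hypothetical example: simulation favoured cell (0, 0), but in the changed
# environment cell (1, 0) actually performs best.
sim_fitness = {(0, 0): 2.0, (1, 0): 1.5, (2, 0): 1.2}
field_fitness = {(0, 0): 0.4, (1, 0): 1.4, (2, 0): 1.1}
print(adapt(sim_fitness, field_fitness.get))  # → ((1, 0), 1.4)
```

The point of the comparison stands out in the sketch: the expensive step (filling `archive_fitness`) is done once, while adaptation amounts to a handful of trials, whereas a traditional genetic algorithm would rerun the whole evolution for each new scenario.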

6.1.4 Future Work

Explore the significance of features

As mentioned in the discussion above, more work needs to be done in researching the significance of the features, that is, what determines the gait pattern. The current system does not map the features and performance well enough. Drawing on the results seen in Chapter 5, this method could be extended by expanding the dimensionality or changing the current features to investigate what causes the behavior of a gait in a given environment. Suggestions for how the archiving could be changed include mapping the posture of the legs, factoring in the body joints, or the speed of the different joints. Exploring other factors that make the robot adaptable may lead to further increased performance and a stronger capability of adaptation. These factors can either be used as a replacement for the current sweep and lift features, or as an extra dimension, with due regard to the Curse of Dimensionality (Section 2.5).

Implement with dynamic gaits

One of the biggest weaknesses of the simulated robot is that the passive open-loop gait pattern creates a very unstable and clumsy robot whose fitness is very susceptible to chance. Although unsupported open-loop walking can usually create stable systems [31], there is a clear weakness in the random nature of the terrain. It may be possible to further improve fitness by using IT&E as a component in a dynamic gait pattern, where the optimization of gaits is part of a larger control system. Implementations of control systems can use feedback from IT&E as an alternative to other gait adaptation, as in composite systems like BigDog from Boston Dynamics [28, 32].

Complexity

The Curse of Dimensionality (Section 2.5) is crucial in the generation of the MAP-Elites archive.
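To make the dimensionality concern concrete, the archive can be pictured as a grid keyed by discretized feature cells. The sketch below is an illustrative structure, not the Sferes2 implementation used in the thesis; the sweep/lift feature names follow Chapter 4, and features are assumed normalized to [0, 1]. With b bins per feature and d feature dimensions the grid has b**d cells, which is why each added dimension makes training far more demanding:

```python
# Minimal sketch of a MAP-Elites archive keyed by discretized feature cells.
# Illustrative only; the thesis uses Sferes2's MAP-Elites implementation.

def feature_cell(features, bins=10):
    """Map features in [0, 1] (e.g. sweep and lift) to a grid cell."""
    clamp = lambda v: min(max(v, 0.0), 1.0 - 1e-9)
    return tuple(int(clamp(f) * bins) for f in features)

def try_insert(archive, genome, features, fitness):
    """Keep only the best genome (the elite) in each cell."""
    cell = feature_cell(features)
    if cell not in archive or fitness > archive[cell][1]:
        archive[cell] = (genome, fitness)

archive = {}
try_insert(archive, "genome_a", (0.22, 0.84), 1.5)
try_insert(archive, "genome_b", (0.24, 0.87), 2.0)  # same cell, better
try_insert(archive, "genome_c", (0.90, 0.10), 1.0)
print(len(archive))      # 2 cells occupied out of 10**2
print(10 ** 3, 10 ** 4)  # cell counts for 3 and 4 feature dimensions
```

At 10 bins per feature, going from two to four dimensions grows the archive from 100 to 10,000 cells, each of which MAP-Elites must fill with an elite.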
In such applications it might be beneficial to add multiple dimensions to the feature space in order to better separate the data in the multi-dimensional space. Future work may expand on the complexity of the archive and analyze the significance that the Curse of Dimensionality has on the training of MAP-Elites. It could also explore the gains that come with more computational power, a higher resolution of the MAP-Elites archive, and running the computation for more generations. Figure 5.8 illustrates that the archive is still evolving after 2300 generations, and that the trend in mean fitness of the archive can be extrapolated to keep increasing over further generations. With more dimensions in the archive, or a higher resolution, the computation becomes very intense, but may give a higher quality of adaptation, where

for instance the individual joints may be adjusted in the feature space to better adapt to the environment. Future work could explore whether there are increases in performance to be gained by increasing the complexity, at the cost of demanding more powerful computers.

Transfer to the real world, crossing the reality gap

Future work can expand on that done with MAP-Elites, IT&E and gait adaptation by moving it into the real world, seeing how IT&E can help in crossing the reality gap [19] and how the simulation done in Chapter 5 behaves. Moving from a simulated reality over to the real world will raise some serious challenges with the uncertainty of the environment, where the conditions and events are no longer under the control of the user. Another challenge of the real world is how IT&E will do the evaluations of individuals, as the knowledge of the environment is very limited in comparison to the knowledge that a simulation can provide. IT&E can act as a connection between simulation and reality, where it may help in transferring the simulated archive over to the real world by finding features that transfer well, and limiting the confidence in the features that do not perform as well when transferred.

Expand on simulation

The simulation in its current form is very simplified in terms of features. As mentioned in Section 2.2, simplification may be necessary in order to make the results transferable to the real world. There are, however, some aspects of the simulation that can be expanded without adding a significant amount of complexity. Simulating other types of terrain, such as sand, snow, dirt, mud, asphalt, or grass, can expand the optimization. This will help gain better performance in different types of environments, where the friction of the legs may have an impact on how the gait pattern should be.
A suggestion for future iterations of the simulator is also to simulate unevenness of the ground, as the current optimization only applies to obstacles on a simple flat plane, positioned at different angles. Making the simulated ground more realistically uneven should improve the quality of the optimization beyond this simple environment.

6.2 Conclusion

Using the sampled data from the evaluations done by IT&E, in comparison to how ELITE and ELITE_O perform as the environment changes, there are clear signs of a performance increase from using IT&E optimization. In both the scenarios that ELITE and ELITE_O were optimized for, as well as previously unseen scenarios, ELITE and ELITE_O had a significantly lower mean performance than the IT&E-optimized solutions. The tests performed on IT&E revealed a mean performance gain of 38.7%

over the mean results from ELITE, and a 23.7% mean performance gain over ELITE_O (see Figure 6.1). IT&E optimization increased the stability of the robot by 48.7% relative to ELITE, and 22.5% relative to ELITE_O, based on the data from Table 5.5. IT&E showed 100% stability in the evaluations, meaning that none of the robots flipped during the evaluations. The results may, however, cause some reasonable doubt as to whether IT&E is actually adapting to the environment. The issue is to figure out whether IT&E is driven by the fitness of the individuals in the archive, cherry-picking the ones that happened to perform well, as opposed to actually taking the features into consideration (see Sections 6.1.1 and 6.1.2). The results show an objectively good performance increase; the cause of this increase, however, may need further research.

Figure 6.1: Mean fitness of ELITE/ELITE_O vs. IT&E


More information

A GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS

A GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS A GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS Jim Gasvoda and Qin Ding Department of Computer Science, Pennsylvania State University at Harrisburg, Middletown, PA 17057, USA {jmg289, qding}@psu.edu

More information

CHAPTER 5. CHE BASED SoPC FOR EVOLVABLE HARDWARE

CHAPTER 5. CHE BASED SoPC FOR EVOLVABLE HARDWARE 90 CHAPTER 5 CHE BASED SoPC FOR EVOLVABLE HARDWARE A hardware architecture that implements the GA for EHW is presented in this chapter. This SoPC (System on Programmable Chip) architecture is also designed

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Informed Search and Exploration Chapter 4 (4.3 4.6) Searching: So Far We ve discussed how to build goal-based and utility-based agents that search to solve problems We ve also presented

More information

Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition

Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition M. Morita,2, R. Sabourin 3, F. Bortolozzi 3 and C. Y. Suen 2 École de Technologie Supérieure, Montreal,

More information

Introduction to Design Optimization: Search Methods

Introduction to Design Optimization: Search Methods Introduction to Design Optimization: Search Methods 1-D Optimization The Search We don t know the curve. Given α, we can calculate f(α). By inspecting some points, we try to find the approximated shape

More information

Multi-Objective Optimization Using Genetic Algorithms

Multi-Objective Optimization Using Genetic Algorithms Multi-Objective Optimization Using Genetic Algorithms Mikhail Gaerlan Computational Physics PH 4433 December 8, 2015 1 Optimization Optimization is a general term for a type of numerical problem that involves

More information

Feature Detectors and Descriptors: Corners, Lines, etc.

Feature Detectors and Descriptors: Corners, Lines, etc. Feature Detectors and Descriptors: Corners, Lines, etc. Edges vs. Corners Edges = maxima in intensity gradient Edges vs. Corners Corners = lots of variation in direction of gradient in a small neighborhood

More information

An Evolutionary Algorithm for the Multi-objective Shortest Path Problem

An Evolutionary Algorithm for the Multi-objective Shortest Path Problem An Evolutionary Algorithm for the Multi-objective Shortest Path Problem Fangguo He Huan Qi Qiong Fan Institute of Systems Engineering, Huazhong University of Science & Technology, Wuhan 430074, P. R. China

More information

3 Nonlinear Regression

3 Nonlinear Regression CSC 4 / CSC D / CSC C 3 Sometimes linear models are not sufficient to capture the real-world phenomena, and thus nonlinear models are necessary. In regression, all such models will have the same basic

More information

5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing. 6. Meta-heuristic Algorithms and Rectangular Packing

5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing. 6. Meta-heuristic Algorithms and Rectangular Packing 1. Introduction 2. Cutting and Packing Problems 3. Optimisation Techniques 4. Automated Packing Techniques 5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing 6.

More information

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions Data In single-program multiple-data (SPMD) parallel programs, global data is partitioned, with a portion of the data assigned to each processing node. Issues relevant to choosing a partitioning strategy

More information

Chapter 14 Global Search Algorithms

Chapter 14 Global Search Algorithms Chapter 14 Global Search Algorithms An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Introduction We discuss various search methods that attempts to search throughout the entire feasible set.

More information

CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12

CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12 Tool 1: Standards for Mathematical ent: Interpreting Functions CCSSM Curriculum Analysis Project Tool 1 Interpreting Functions in Grades 9-12 Name of Reviewer School/District Date Name of Curriculum Materials:

More information

Optimate CFD Evaluation Optimate Glider Optimization Case

Optimate CFD Evaluation Optimate Glider Optimization Case Optimate CFD Evaluation Optimate Glider Optimization Case Authors: Nathan Richardson LMMFC CFD Lead 1 Purpose For design optimization, the gold standard would be to put in requirements and have algorithm

More information

Range Sensors (time of flight) (1)

Range Sensors (time of flight) (1) Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors

More information

Introduction. Chapter Overview

Introduction. Chapter Overview Chapter 1 Introduction The Hough Transform is an algorithm presented by Paul Hough in 1962 for the detection of features of a particular shape like lines or circles in digitalized images. In its classical

More information

A Novel Approach to Planar Mechanism Synthesis Using HEEDS

A Novel Approach to Planar Mechanism Synthesis Using HEEDS AB-2033 Rev. 04.10 A Novel Approach to Planar Mechanism Synthesis Using HEEDS John Oliva and Erik Goodman Michigan State University Introduction The problem of mechanism synthesis (or design) is deceptively

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

CHAPTER 4 FEATURE SELECTION USING GENETIC ALGORITHM

CHAPTER 4 FEATURE SELECTION USING GENETIC ALGORITHM CHAPTER 4 FEATURE SELECTION USING GENETIC ALGORITHM In this research work, Genetic Algorithm method is used for feature selection. The following section explains how Genetic Algorithm is used for feature

More information

V.Petridis, S. Kazarlis and A. Papaikonomou

V.Petridis, S. Kazarlis and A. Papaikonomou Proceedings of IJCNN 93, p.p. 276-279, Oct. 993, Nagoya, Japan. A GENETIC ALGORITHM FOR TRAINING RECURRENT NEURAL NETWORKS V.Petridis, S. Kazarlis and A. Papaikonomou Dept. of Electrical Eng. Faculty of

More information

Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm

Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm Dr. Ian D. Wilson School of Technology, University of Glamorgan, Pontypridd CF37 1DL, UK Dr. J. Mark Ware School of Computing,

More information

EVOLVING LEGO. Exploring the impact of alternative encodings on the performance of evolutionary algorithms. 1. Introduction

EVOLVING LEGO. Exploring the impact of alternative encodings on the performance of evolutionary algorithms. 1. Introduction N. Gu, S. Watanabe, H. Erhan, M. Hank Haeusler, W. Huang, R. Sosa (eds.), Rethinking Comprehensive Design: Speculative Counterculture, Proceedings of the 19th International Conference on Computer- Aided

More information

The Curse of Dimensionality

The Curse of Dimensionality The Curse of Dimensionality ACAS 2002 p1/66 Curse of Dimensionality The basic idea of the curse of dimensionality is that high dimensional data is difficult to work with for several reasons: Adding more

More information

Genetic Algorithm for Dynamic Capacitated Minimum Spanning Tree

Genetic Algorithm for Dynamic Capacitated Minimum Spanning Tree 28 Genetic Algorithm for Dynamic Capacitated Minimum Spanning Tree 1 Tanu Gupta, 2 Anil Kumar 1 Research Scholar, IFTM, University, Moradabad, India. 2 Sr. Lecturer, KIMT, Moradabad, India. Abstract Many

More information

INTERACTIVE MULTI-OBJECTIVE GENETIC ALGORITHMS FOR THE BUS DRIVER SCHEDULING PROBLEM

INTERACTIVE MULTI-OBJECTIVE GENETIC ALGORITHMS FOR THE BUS DRIVER SCHEDULING PROBLEM Advanced OR and AI Methods in Transportation INTERACTIVE MULTI-OBJECTIVE GENETIC ALGORITHMS FOR THE BUS DRIVER SCHEDULING PROBLEM Jorge PINHO DE SOUSA 1, Teresa GALVÃO DIAS 1, João FALCÃO E CUNHA 1 Abstract.

More information