The 13th International Conference RELIABILITY and STATISTICS in TRANSPORTATION and COMMUNICATION - 2013
Proceedings of the 13th International Conference Reliability and Statistics in Transportation and Communication (RelStat'13), 16-19 October 2013, Riga, Latvia, pp. 155-162. ISBN 978-9984-818-58-0
Transport and Telecommunication Institute, Lomonosova 1, LV-1019, Riga, Latvia

EVOLUTIONARY OPTIMIZATION OF A FLOW LINE USING THE ExtendSim BUILT-IN OPTIMIZER

Baiba Zvirgzdiņa, Jurijs Tolujevs
Transport and Telecommunication Institute
Lomonosova str. 1, LV-1019, Riga, Latvia
E-mail: baiba.zvirgzdina@inbox.lv

Keywords: flow line simulation, ExtendSim, evolutionary optimization

Introduction

ExtendSim is a simulation program for modelling discrete event, continuous, agent-based, and discrete rate processes [1]. There are four ExtendSim packages: CP for continuous processes; OR (operations research), which adds discrete event; AT (advanced technology), which adds discrete rate, a number of advanced modelling features, and Stat::Fit for statistical distribution fitting; and Suite, which adds 3D animation. ExtendSim is used for modelling manufacturing, logistics, supply chain, healthcare, communications, defence, environmental, agricultural, biological, energy, reliability, service, information flow, and recreational systems.

An open-source evolutionary optimizer is included in all versions of ExtendSim. ExtendSim facilitates optimization by making the optimization algorithm available within a block that can be added to any model to control all aspects of the optimization. Furthermore, having a block do the optimization increases flexibility and opens up the method and source code to users who might want to modify it or create their own customized optimization blocks.

Optimization, sometimes known as goal seeking, is a useful technique for automatically finding the best answer to a problem. Like most optimization algorithms, the ExtendSim Optimizer solves models using an initial population of possible solutions. Each solution is explored by running the model several times using different values for some selected parameters, averaging the samples (for stochastic, random models), and sorting the solutions. The best solution sets of parameters are then used to derive slightly different but possibly better solutions. Each new set of derived solutions is called a generation. This process continues for enough generations until the Optimizer determines that there are probably no better solutions in sight. The Optimizer then terminates the simulation runs and populates the model with the best solution it has found.

Although the source code of the ExtendSim evolutionary optimizer is open, the principles of its internal organization are not described in the ExtendSim user documentation; the user is expected to treat the optimizer as a black box whose behaviour can be tuned with only two parameters: a) the maximum number of runs of the stochastic model with one combination of the varied variables, and b) the boundary value of the population convergence indicator. All that is known about the ExtendSim Optimizer block is that it uses an evolutionary algorithm similar to genetic algorithms, but, because it works with floating-point arithmetic, it is better suited to stochastic processes such as simulation. There are also indications that this algorithm is most similar to the evolution strategies type of algorithm [2].
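As an illustration of the general scheme described above (and not of the ExtendSim Optimizer's actual source code), a minimal sketch of such a sample-averaging evolutionary loop might look as follows; the function and parameter names are hypothetical:

```python
import random
from statistics import mean

def evolutionary_optimize(simulate, bounds, pop_size=10,
                          samples_per_case=5, max_generations=100,
                          tolerance=0.005):
    """Illustrative sample-averaging evolutionary search over integer variables.

    simulate(solution) returns one noisy observation of the objective;
    bounds is a list of (low, high) integer ranges, one per decision variable.
    """
    def random_solution():
        return [random.randint(lo, hi) for lo, hi in bounds]

    def fitness(solution):
        # Average several replications to damp the simulation noise.
        return mean(simulate(solution) for _ in range(samples_per_case))

    population = [random_solution() for _ in range(pop_size)]
    best_solution, best_value = None, float("-inf")
    for _ in range(max_generations):
        scored = sorted(((fitness(s), s) for s in population), reverse=True)
        if scored[0][0] > best_value:
            best_value, best_solution = scored[0][0], scored[0][1]
        # Convergence test: best and worst members are within a small fraction.
        if scored[0][0] > 0 and (scored[0][0] - scored[-1][0]) / scored[0][0] < tolerance:
            break
        # Keep the better half and derive slightly perturbed copies of it.
        parents = [s for _, s in scored[: pop_size // 2]]
        children = [[min(hi, max(lo, x + random.choice((-1, 0, 1))))
                     for x, (lo, hi) in zip(p, bounds)]
                    for p in parents]
        population = parents + children
    return best_solution, best_value
```

The essential features mirrored here are the averaging of several noisy replications per candidate solution and the termination test on the spread between the best and the worst member of the population.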
In writing this paper the authors had two aims: a) to summarize the theoretical basis of the evolution strategies class of algorithms, and b) to study experimentally how the behaviour of the ExtendSim optimizer can be influenced when it is used together with a classic production line simulation model [3].

Evolution Strategies

A comprehensive introduction to one of the main branches of evolutionary computation, the evolution strategies (ES), whose history dates back to the 1960s in Germany, is given in [2]. An evolutionary algorithm (EA) is a subset of evolutionary computation: a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place through the repeated application of the above operators.

EAs often perform well in approximating solutions to all types of problems because, ideally, they do not make any assumptions about the underlying fitness landscape.

An ES is an optimization technique based on ideas of adaptation and evolution. It belongs to the general class of evolutionary computation methodologies. ES use natural, problem-dependent representations and primarily mutation and selection as search operators. In common with other EAs, the operators are applied in a loop. An iteration of the loop is called a generation. The sequence of generations is continued until a termination criterion is met.

Three nearly contemporaneous sources of EAs have been kept alive over three decades and have experienced an amazing increase of interest during the last fifteen years. Two of them lie in the United States of America: the source of evolutionary programming (EP) [4] and the source of genetic algorithms (GA) [5]. The ES, the third main variant of EA, were founded by students at the Technical University of Berlin [6]. Two rules were used for designing and evaluating the consecutive ES experiments:

1. Change all variables at a time, mostly slightly and at random.
2. If the new set of variables does not diminish the goodness of the device, keep it; otherwise return to the old status.

Rule one seemed to resemble mutations in nature, and rule two modelled the survival of the fittest. The performance of the ES largely depends on the adjustment of the internal parameters, prominently the mutation strength(s). Schwefel introduced two multi-membered ES: the (µ+λ)-ES, in which not only one offspring is created at a time or in a generation but λ descendants, and, to keep the population size constant, the λ worst out of all µ+λ individuals are discarded; and the (µ,λ)-ES, in which the selection takes place among the λ offspring only, whereas their parents are forgotten no matter how good or bad their fitness was compared to that of the new generation. Obviously, this strategy relies on a birth surplus, i.e., on λ > µ. As far as real-valued search spaces are concerned, mutation is normally performed by adding a normally distributed random value to each vector component. The step size or mutation strength (i.e. the standard deviation of the normal distribution) is often governed by self-adaptation. Individual step sizes for each coordinate, or correlations between coordinates, are either governed by self-adaptation or by covariance matrix adaptation.

ES Algorithm

The usual goal of an ES is to optimize (some) given objective or quality function(s) F with respect to a set of decision variables or control parameters, often referred to as object parameters. The search space Y can be any set of data structures of finite but not necessarily fixed length. Examples for Y are the real-valued N-dimensional search space R^N, the integer search space Z^N, the binary search space B^N, and the space of permutations P^N, as well as mixtures of different spaces and subspaces (due to constraints). ES operate on populations β of individuals a. An individual a_k with index k comprises not only the specific object parameter set (or vector) y_k and its objective function value F(y_k), sometimes referred to as fitness, but usually also a set of strategy parameters.
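To make the notation concrete, a minimal sketch of how such an individual could be represented in code (an illustrative data structure with hypothetical field names, not taken from ExtendSim or from [2]) is:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Individual:
    """One ES individual: object parameters, strategy parameters, fitness."""
    y: List[float]                   # object parameters (the decision variables)
    sigma: List[float]               # strategy parameters (mutation strengths)
    fitness: Optional[float] = None  # objective function value F(y), once evaluated

# A population is simply a list of such individuals, e.g. the parent
# population of size mu and the offspring population of size lambda.
Population = List[Individual]
```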
The strategy parameters are a peculiarity of ES: they are used to control certain statistical properties of the genetic operators, especially those of the mutation operator. Strategy parameters can evolve during the evolution process and are needed in self-adaptive ES. Within one ES generation step, λ offspring individuals are generated from the set of µ parent individuals a_m. That is, the size λ of the offspring population β_o is usually not equal to the size µ of the parent population β_P. The strategy-specific parameters µ and λ, as well as ρ (the mixing number), are called exogenous strategy parameters; they are kept constant during the evolution run.
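A minimal sketch of self-adaptive Gaussian mutation, as commonly described for ES (log-normal adaptation of the step sizes followed by normally distributed perturbation of the object parameters), is shown below; it is illustrative only, and the parameter names are hypothetical:

```python
import math
import random

def mutate(y, sigma, tau=None):
    """Self-adaptive Gaussian mutation of one ES individual.

    The strategy parameters (step sizes) are mutated first with a log-normal
    factor; the object parameters are then perturbed with normally
    distributed values using the new step sizes.
    """
    n = len(y)
    tau = tau if tau is not None else 1.0 / math.sqrt(n)  # a common learning-rate choice
    new_sigma = [s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigma]
    new_y = [yi + si * random.gauss(0.0, 1.0) for yi, si in zip(y, new_sigma)]
    return new_y, new_sigma
```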

The mixing number ρ refers to the number of parents involved in the procreation of one offspring. For ρ = 1 (cloning), we have the special ES cases without recombination, usually denoted by (µ+λ)-ES and (µ,λ)-ES, respectively. All other cases (ρ > 1) are strategies with recombination, denoted (µ/ρ+λ)-ES and (µ/ρ,λ)-ES. This makes them less prone to getting stuck in local optima. The "+" or "," in the notation refers to the kind of selection used, i.e., plus and comma selection, respectively.

Selection

Each EA needs a goal-oriented selection operator in order to guide the search into promising regions of the object parameter space. It gives the evolution a direction. In ES selection, only those individuals with promising properties, e.g., high fitness values (objective function values), get a chance of reproduction. That is, a new parental population at generation (g+1) is obtained by a deterministic process guaranteeing that only the µ best individuals from the selection pool of generation (g) are transferred into it. There are two versions of this selection technique, depending on whether or not the parental population of generation (g) is included in this process: plus selection, denoted by (µ+λ), and comma selection, denoted by (µ,λ). In the case of (µ,λ) selection, only the λ newly generated offspring individuals, i.e. the offspring population, define the selection pool. In other words, the parents from generation (g) are forgotten even when they are better than all offspring. The (µ+λ) notation indicates that both the parents and the offspring are copied into the selection pool, which is therefore of size µ+λ. Plus selection guarantees the survival of the best individual found so far. Since it preserves the best individual, such selection techniques are called elitist. Due to the elitism in plus strategies, parents can survive an infinitely long time-span. Both selection variants have their specific application areas. While comma selection is recommended for unbounded search spaces Y, especially real-valued ones, plus selection should be used in discrete finite-size search spaces, e.g., in combinatorial optimization problems.

Mutation

There is no established design methodology for mutation operators up to now, but some rules have been proposed by analyzing successful ES implementations and by theoretical considerations.

Reachability. Given a parental state, the first requirement ensures that any other (finite) state can be reached within a finite number of mutation steps or generations. This is also a necessary condition for proving global convergence.

Unbiasedness. Selection exploits the fitness information in order to guide the search into promising search space regions, whereas variation explores the search space, i.e., it should not use any fitness information but only the search space information from the parental population. Therefore, there is no preference for any of the selected individuals (parents) in ES.

Scalability. The scalability requirement states that the mutation strength, or the average length of a mutation step, should be tunable in order to adapt to the properties of the fitness landscape. The goal of adaptation is to ensure the evolvability of the ES system, i.e., of the ES algorithm in conjunction with the objective function. The term evolvability expresses the idea that the variations should be generated in such a way that improvement steps are likely, thus building a smooth evolutionary random path through the fitness landscape toward the optimum solution.
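As a concrete illustration of the two selection variants, expressed in terms of the hypothetical Individual structure sketched earlier (illustrative code, not taken from [2] or ExtendSim):

```python
def select(parents, offspring, mu, scheme="plus"):
    """Deterministic ES selection: keep the mu best individuals.

    'plus'  -> (mu + lambda): the pool contains parents and offspring (elitist).
    'comma' -> (mu , lambda): the pool contains the offspring only; parents are
               forgotten even when they are better than all offspring.
    """
    pool = parents + offspring if scheme == "plus" else list(offspring)
    pool.sort(key=lambda ind: ind.fitness, reverse=True)  # maximization
    return pool[:mu]
```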
The smoothness assumption is sometimes expressed in terms of the strong causality concept, stating that small changes on the genetic level should result, on average, in small changes in the fitness values.

Recombination

While mutation performs search steps based on the information of only one parent and on its endogenous strategy parameters, recombination shares the information from up to ρ parent individuals.
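The two standard recombination classes used in ES, described in the following paragraphs, are discrete (dominant) recombination and intermediate recombination; a minimal illustrative sketch of both (hypothetical helper functions, not library code) could look as follows:

```python
import random

def discrete_recombination(parents):
    """Dominant recombination: each coordinate of the recombinant is copied
    from the corresponding coordinate of a randomly (uniformly) chosen parent."""
    n = len(parents[0])
    return [random.choice(parents)[k] for k in range(n)]

def intermediate_recombination(parents):
    """Intermediate recombination: the recombinant is the centre of mass
    (centroid) of the rho parent vectors."""
    n = len(parents[0])
    rho = len(parents)
    return [sum(p[k] for p in parents) / rho for k in range(n)]
```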

Unlike standard crossover in GA, where two parents produce two offspring, the application of the standard ES recombination operator to a parent family of size ρ produces only one offspring. There are two standard classes of recombination used in ES: discrete recombination, sometimes referred to as dominant recombination, and intermediate recombination. Given ρ parental vectors (object or strategy parameter vectors), dominant recombination produces a recombinant by coordinate-wise random selection from the corresponding coordinate values of the parent family. In other words, the k-th component of the recombinant is determined exclusively by the k-th component of a randomly (uniformly) chosen parent individual, which can also be interpreted as dominance. In contrast to discrete (dominant) recombination, intermediate recombination takes all ρ parents equally into account: it simply calculates the centre of mass (centroid) of the parent vectors a_m. While this procedure is well defined for real-valued state spaces, its application in discrete spaces needs an additional procedure, such as rounding or probabilistic rounding, in order to map back onto the discrete domain.

Three-Stage Buffer Allocation Problem

We consider an optimization-via-simulation problem in which the performance measure is estimated via a stochastic, discrete-event simulation, and the decision variables are subject to deterministic linear integer constraints. The object of study is a three-stage flow line with finite buffer storage space in front of stations 2 and 3 and an infinite number of jobs in front of station 1. There is a single server at each station, and the service time at station h is exponentially distributed with rate µh, h = 1, 2, 3 (see Fig. 1). If the buffer of station h is full, then station h-1 is blocked and a finished job cannot be released from station h-1. The total buffer space and the service rates are limited. The goal is to find a buffer allocation and service rates such that the throughput (average output of the flow line per unit time) is maximized.

Figure 1. The three-stage flow line as a queuing system (source -> station 1 -> buffer 2 (b2) -> station 2 -> buffer 3 (b3) -> station 3 -> sink, with service rates µ1, µ2, µ3)

Let bh be the number of buffer spaces at station h, h = 2, 3. The constraints are as follows:

µ1 + µ2 + µ3 <= 20,   b2 + b3 = 20,   1 <= µh <= 20 (h = 1, 2, 3),   1 <= bh <= 20 (h = 2, 3),   all variables integer.

Such a three-stage flow line model has been selected in order to provide an opportunity to compare the results of the optimization with those obtained by other authors. Exactly the same model of a three-stage flow line is described in [3]; optimizations of analogous flow line models are considered in [7] and [8].

Realization of the Model and Optimizer Configuration in the ExtendSim Environment

A simulation model realized using the ExtendSim package is shown in Figure 2. All five decision variables are set using blocks of type Constant. Blocks of type Equation are used to calculate the mean processing-time values at the workstations.
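For readers without access to ExtendSim, a compact re-implementation of the flow line of Figure 1 can be sketched with the SimPy discrete-event library; this is only an illustrative stand-in for the ExtendSim model, and the function names are hypothetical:

```python
# pip install simpy  (illustrative sketch, not the ExtendSim model of Figure 2)
import random
import simpy

def throughput(mu, b2, b3, sim_time=10_000, seed=1):
    """Estimate the throughput of the three-stage flow line of Figure 1.

    mu = (mu1, mu2, mu3) are exponential service rates, b2 and b3 the finite
    buffer capacities in front of stations 2 and 3. Station 1 never starves
    (infinite job supply); a station blocks when the downstream buffer is full.
    """
    random.seed(seed)
    env = simpy.Environment()
    buffer2 = simpy.Store(env, capacity=b2)
    buffer3 = simpy.Store(env, capacity=b3)
    finished = [0]

    def station1():
        while True:
            yield env.timeout(random.expovariate(mu[0]))  # serve a job
            yield buffer2.put("job")                      # blocks if buffer 2 is full

    def station2():
        while True:
            yield buffer2.get()
            yield env.timeout(random.expovariate(mu[1]))
            yield buffer3.put("job")                      # blocks if buffer 3 is full

    def station3():
        while True:
            yield buffer3.get()
            yield env.timeout(random.expovariate(mu[2]))
            finished[0] += 1                              # job leaves the line

    for proc in (station1, station2, station3):
        env.process(proc())
    env.run(until=sim_time)
    return finished[0] / sim_time

# Example: the allocation reported as optimal for Model Variant 1 in the paper.
# print(throughput((6, 7, 7), b2=8, b3=12))
```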

In the first version of the model, which fully corresponds to [3], each value µh is interpreted as a processing intensity (rate), and the mean time value is calculated by the formula

τh = 1 / µh.

This version of the model has the disadvantage that the resulting optimization problem does not have a clearly expressed (strict) extremum: owing to the homogeneity of the workstations, the set of values (µ1, µ2, µ3) = (7, 7, 6) can be just as optimal as (µ1, µ2, µ3) = (6, 7, 7). For this reason, a second variant of the model was explored, in which each workstation has its own average processing time per single machine, (t1, t2, t3) = (0,333; 0,5; 0,2), and the mean time value for the whole station is calculated by the formula

τh = th / µh,

where the parameter µh is interpreted as the number of parallel working machines [7]. Table 1 shows the values τh that correspond to the optimization values µh.

The following steps are needed to optimize the simulation model of a three-stage flow line [9]:
1) Add an Optimizer block (Value library) to the model.
2) Define the form of the objective function.
3) Determine which variables the equation needs and clone-drop them onto the Optimizer.
4) Set the limits for those variables in the Optimizer's Variables table.
5) Derive the equations for the objective function.
6) If variables need to be constrained to certain values, add constraint equations.
7) Set the Optimizer's Run Parameters for a random model, and then run the optimization.

Table 1. The values of mean time, calculated in the Equation blocks

Model Variant 1 [3]
rate µh [1/h]:        1      2      3      4      5      6      7      8      9      10
mean time τh [h]:     1      0,5    0,333  0,25   0,2    0,167  0,143  0,125  0,111  0,1

Model Variant 2 [7]
machines µh:          1      2      3      4      5      6      7      8      9      10
mean time τ1 [h]:     0,333  0,166  0,111  0,083  0,067  0,055  0,048  0,042  0,037  0,033
mean time τ2 [h]:     0,5    0,25   0,167  0,125  0,1    0,083  0,071  0,063  0,056  0,05
mean time τ3 [h]:     0,2    0,1    0,067  0,05   0,04   0,033  0,029  0,025  0,022  0,02

Figure 2. The three-stage flow line as an ExtendSim simulation model

Figure 3 shows the results of Steps 1-5. For all decision variables the boundary values Minimum Unit = 1 and Maximum Unit = 20 are defined. The variable Var6 holds the value of the average output of the flow line per unit time, which is read from the connector TP of the Information block. The expression MaxProfit = Var6 turns the variable Var6 into the objective function, which is subject to maximization.
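The entries of Table 1 follow directly from the two formulas above; a short sketch that reproduces them (hypothetical function names, for illustration only):

```python
def tau_variant1(mu):
    """Model Variant 1: mu is a processing rate, mean time = 1 / mu."""
    return 1.0 / mu

def tau_variant2(t_single, mu):
    """Model Variant 2: mu parallel machines, each with mean time t_single."""
    return t_single / mu

t = {1: 0.333, 2: 0.5, 3: 0.2}   # mean time of a single machine per station [h]
for mu in range(1, 11):
    row = [round(tau_variant1(mu), 3)] + [round(tau_variant2(t[h], mu), 3) for h in (1, 2, 3)]
    print(mu, row)   # e.g. mu = 3 -> [0.333, 0.111, 0.167, 0.067], matching Table 1
```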

Figure 3. Determination of decision variables and objective function for the optimization problem

The results of Step 6 are shown in Figure 4. In the global constraints form, the inequalities shown above in the mathematical formulation of the optimization problem are entered.

Figure 4. Definition of constraint equations

The following Optimizer's Run Parameters must be set in Step 7:

Maximum Samples per Case: the maximum number of runs averaged to get a member result. The Optimizer block starts the number of samples at 1 and increases it with each generation until it reaches the maximum. Sometimes it is useful to reduce the maximum number of samples (possibly to 5) to get a rough idea of the best solution without spending too much time running samples.

Member Population Size: how many members are in the evolutionary population.

Check Convergence after "n" cases: how many cases to wait before checking for convergence. This prevents premature false convergence by ensuring that a minimum number of cases (generations) are run.

Terminate optimization after "n" cases: sets a maximum on the number of cases (generations) that the optimizer will run.

Terminate if best and worst within: if this option is selected, you can specify how close to the optimum you want the results to approach. This value is used to examine the current population of results to see whether they are within a specified percentage of each other.

The concrete values of these parameters under which the optimization of the model was performed are shown in Table 2.

Experiments with Model Optimization

The ExtendSim Optimizer block offers two standard sets of values for the Run Parameters: Quicker Defaults and Better Defaults. The variant Quicker Defaults is used in experiments No. 1, 2, 3 and 4, and the variant Better Defaults in experiments No. 0, 5 and 6 (see Table 2). Experiment No. 0 was performed with Model Variant 1 only to verify the model; its optimization results are almost identical to the results reported in [3]. Experiments No. 1, 2, 3, 4, 5 and 6 were performed with Model Variant 2.
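The exact termination rule of the Optimizer block is not published in the ExtendSim documentation; one plausible reading of the run-parameter descriptions above can be sketched as follows (illustrative only, with hypothetical names):

```python
def should_terminate(case_number, fitnesses, check_after, max_cases, within):
    """A plausible reading of the Optimizer's termination logic (not documented code).

    fitnesses: current population of member results (objective values);
    within:    e.g. 0,995 -> stop once the worst member is within 99,5 % of the best.
    """
    if case_number >= max_cases:          # hard limit on cases (generations)
        return True
    if case_number < check_after:         # avoid premature false convergence
        return False
    best, worst = max(fitnesses), min(fitnesses)
    return best != 0 and (worst / best) >= within
```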

Table 2. Optimizer's run parameters and optimization results

Exp. No. | Max. samples per case | Population size | Check conv. after n cases | Terminate after n cases | Best/worst within | MaxProfit [1/h] | Convergence [%] | Total cases (generations) | Total samples (runs) | µ1, µ2, µ3, b2, b3
Model Variant 1
0 | 100 | 10 | 100 | 1000 | 0,995 | 5,80  | 99,55 | 168 | 4362 | 6, 7, 7, 8, 12
Model Variant 2
1 | 5   | 10 | –   | 1000 | 0,95  | 17,18 | 95,12 | 62  | 360  | 7, 9, 4, 7, 13
2 | 100 | 10 | –   | 1000 | 0,95  | 15,08 | 98,28 | 51  | 530  | 6, 8, 5, 5, 15
3 | 5   | 10 | –   | 1000 | 0,99  | 17,14 | 99,00 | 255 | 1570 | 6, 9, 5, 10, 10
4 | 5   | 20 | –   | 1000 | 0,95  | 16,56 | 95,02 | 105 | 603  | 7, 9, 4, 3, 17
5 | 100 | 10 | 100 | 1000 | 0,995 | 17,26 | 99,25 | 193 | 56   | 6, 10, 4, 14, 6
6 | 100 | 20 | 100 | 1000 | 0,995 | 17,44 | 98,45 | 201 | 2104 | 6, 10, 4, 11, 9

Each optimization procedure starts with the decision variable values (µ1, µ2, µ3, b2, b3) = (6, 6, 6, 10, 10). The number of feasible solutions is 21660. The optimal solution for Model Variant 2 is (µ1, µ2, µ3, b2, b3) = (6, 10, 4, 11, 9), with an expected throughput of 17,44 jobs per unit time. As can be seen from Table 2, this solution was obtained only once, in Experiment No. 6. The fundamentally important relationship µ2 > µ1 > µ3 is fulfilled in all experiments. This indicates a tendency to make the throughput of all three stations approximately the same under the given condition t2 > t1 > t3. Figure 5 shows the output variables of the model, MaxProfit (throughput) and Convergence, for the experiments with Model Variant 2.

Figure 5. Graphical charts of MaxProfit (throughput) and Convergence versus the case number for Experiments No. 1-6 with Model Variant 2
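The balancing tendency noted above can be checked quickly with the per-machine times of Model Variant 2; the helper name `capacity` is introduced only for this illustration, and the constraint check assumes the constraint set given earlier:

```python
t = {1: 0.333, 2: 0.5, 3: 0.2}   # per-machine mean times of Model Variant 2 [h]
mu = {1: 6, 2: 10, 3: 4}         # optimal numbers of parallel machines (Experiment No. 6)
b2, b3 = 11, 9                   # optimal buffer allocation

# Effective station capacity = number of machines / per-machine mean time = mu_h / t_h.
capacity = {h: mu[h] / t[h] for h in (1, 2, 3)}
print(capacity)                  # ~{1: 18.0, 2: 20.0, 3: 20.0} -> stations nearly balanced

# The reported optimum also satisfies the linear integer constraints of the problem.
assert sum(mu.values()) <= 20 and b2 + b3 == 20
```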

Conclusions

The evolutionary algorithm used in the ExtendSim optimizer showed satisfactory results in the optimization of a model of a three-stage flow line, which belongs to the class of stochastic, discrete-event simulations. The peculiarity of the optimization problem lay in the fact that all five decision variables were non-negative integers and linear constraints were formulated for them. In the cases where the Optimizer's Run Parameters were set according to the Quicker Defaults strategy, the algorithm led only to suboptimal solutions. Even with the Better Defaults strategy, the optimal solution could be obtained only after the parameter Member Population Size was increased from 10 to 20. The optimal solution was not obtained even in Experiment No. 5, in which the number of Total samples (simulation runs) reached the value 56. The wide variety of graphical charts shown in Figure 5 is the result of the high level of stochasticity of both the model of the three-stage flow line and the evolutionary optimization algorithm. The rule which recommends repeating the optimization procedure for different initial values of the decision variables therefore remains in force.

References

1. Krahl, D. (2002). The Extend simulation environment. In Proceedings of the 2002 Winter Simulation Conference, pp. 205-213.
2. Beyer, H.-G., Schwefel, H.-P. (2002). Evolution strategies – a comprehensive introduction. Natural Computing, 1(1), 3-52.
3. Pichitlamken, J., Nelson, B.L. (2002). A combined procedure for optimization via simulation. In Proceedings of the 2002 Winter Simulation Conference, pp. 292-300.
4. Fogel, L.J. (1962). Autonomous automata. Industrial Research, 4, 14-19.
5. Holland, J.H. (1962). Outline for a logical theory of adaptive systems. JACM, 9, 297-314.
6. Schwefel, H.-P. (1965). Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik. Master's thesis. Berlin: Technical University of Berlin.
7. Law, A.M., McComas, M.G. (2000). Simulation-based optimization. In Proceedings of the 2000 Winter Simulation Conference, pp. 46-49.
8. Buchholz, P., Thümmler, A. (2005). Enhancing evolutionary algorithms with statistical selection procedures for simulation optimization. In Proceedings of the 2005 Winter Simulation Conference, pp. 843-852.
9. ExtendSim User Guide, Release 7. Imagine That, Inc., 2007.