Analysis of Evolutionary Algorithms in the Control of Path Planning Problems


Wright State University CORE Scholar Browse all Theses and Dissertations Theses and Dissertations 2018 Analysis of Evolutionary Algorithms in the Control of Path Planning Problems Pavlos Androulakakis Wright State University Follow this and additional works at: Part of the Electrical and Computer Engineering Commons Repository Citation Androulakakis, Pavlos, "Analysis of Evolutionary Algorithms in the Control of Path Planning Problems" (2018). Browse all Theses and Dissertations. This Thesis is brought to you for free and open access by the Theses and Dissertations at CORE Scholar. It has been accepted for inclusion in Browse all Theses and Dissertations by an authorized administrator of CORE Scholar. For more information, please contact library-corescholar@wright.edu.

Analysis of Evolutionary Algorithms in the Control of Path Planning Problems. A Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering by Pavlos Androulakakis, B.S.E.E., Ohio State University. Wright State University.

Wright State University GRADUATE SCHOOL, August 31, 2018. I HEREBY RECOMMEND THAT THE THESIS PREPARED UNDER MY SUPERVISION BY Pavlos Androulakakis ENTITLED Analysis of Evolutionary Algorithms in the Control of Path Planning Problems BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in Electrical Engineering. Zachariah Fuchs, Ph.D., Thesis Director. Brian Rigling, Ph.D., Chair, Department of Electrical Engineering. Committee on Final Examination: John C. Gallagher, Ph.D.; Luther Palmer, Ph.D.; Zachariah Fuchs, Ph.D. Barry Milligan, Ph.D., Interim Dean of the Graduate School.

ABSTRACT

Androulakakis, Pavlos. M.S.E.E., Department of Electrical Engineering, Wright State University, 2018. Analysis of Evolutionary Algorithms in the Control of Path Planning Problems.

The purpose of this thesis is to examine the ability of evolutionary algorithms (EAs) to develop near-optimal solutions to three different path planning control problems. First, we begin by examining the evolution of an open-loop controller for the turn-circle intercept problem. We then extend the evolutionary methodology to develop a solution to the closed-loop Dubins Vehicle problem. Finally, we attempt to evolve a closed-loop solution to the turn-constrained pursuit-evasion problem. For each of the presented problems, a custom controller representation is used. The goal of using custom controller representations (as opposed to more standard techniques such as neural networks) is to show that simple representations can be very effective if problem-specific knowledge is used. All of the custom controller representations described in this thesis can be easily implemented in any modern programming language without any extra toolboxes or libraries. A standard EA is used to evolve populations of these custom controllers in an attempt to generate near-optimal solutions. The evolutionary framework as well as the process of mixing and mutation is described in detail for each of the custom controller representations. In the problems where an analytically optimal solution exists, the resulting evolved controllers are compared to the known optimal solutions so that we can quantify the EA's performance. A breakdown of the evolution as well as plots of the resulting evolved trajectories are shown for each of the analyzed problems.

Contents

1 Introduction
  1.1 Traditional Solutions
  1.2 Evolutionary Algorithms
  1.3 Thesis Overview
2 Open-loop evolution
  2.1 Turn-Circle Intercept Control
    Problem Description
    System Model
    Controller Design
    Utility
    Problem Definition
    Evolutionary Architecture
    Controller Encoding
    Initial Population
    Crossing
    Mutation
    Results and Analysis
    Basic Air Combat Tactics
    Evolved Lag Pursuit
    Evolved Lead Pursuit
    Intercept Trajectory
    Impact of Results
3 Closed-loop evolution
  3.1 Dubins Vehicle
    Problem Description
    System Model
    Terminal Conditions
    Game Utility
    Controller Design
    Evolutionary Architecture

    Evaluation
    Fitness
    Creating Next Generation
    Copies of Elite Controllers
    Fitness-Based Crossing of Two Controllers
    Results and Analysis
    Dubins Vehicle Optimal Solution
    Analysis of Evolution
    Results of Evolutionary Algorithm
    Further Analysis
    Impact of Results
  Pursuit Evasion
    Problem Description
    System Model
    Relative Coordinates
    Terminal Conditions
    Game Utility
    Controller Design
    Evolutionary Architecture
    Competition
    Fitness
    Creating Next Generation
    Copies of Elite Controllers
    Mutations of Elite Controllers
    Crossing of Two Random Controllers
    Random New Controllers
    Numerical Results and Analysis
    Untrained Starting Positions
    Impact of Results
4 Conclusion and Future Work
  Conclusion
  Future Work
Bibliography

List of Figures

2.1 Global Coordinates
2.2 Desired Separation
2.3 Creating Next Generation
2.4 Crossing Examples
2.5 Lead Pursuit
2.6 Lag Pursuit
2.7 Utility Summary
2.8 Evaluation of Evolution
2.9 Evolved 3 Segment Lag Pursuit
2.10 Evolved Lead Pursuit (7 Seg)
2.11 Evolved Lead Pursuit (3 Seg)
2.12 Evolved 3 Segment Lead Pursuit (ρ_A = π/6)
2.13 Evolved Intercept Trajectory (7 Segments)
2.14 Evolved Intercept Trajectory (3 Segments)
3.1 Global Coordinates
3.2 Relative Coordinates
3.3 Buffer Zones
3.4 Terminal Condition X_1 Example
3.5 Example of a 4x4x4 Grid Controller
3.6 Matrix form of the 4x4x4 Grid Controller in Figure 3.5
3.7 Initial Conditions
3.8 Population Generation Method
3.9 Turn-Straight-Turn (TST) Solution
3.10 Turn-Turn-Turn (TTT) Solution
3.11 Fitness Throughout the Evolution
3.12 Fitness and Success from Gen 1 to 500
3.13 Fitness and Success Gen 500 to 5000
3.14 Summary of Successful Captures
3.15 Generation …
3.16 Generation …
3.17 Generation …
3.18 Generation …

3.19 Generation …
3.20 Generation …
Performance of Best Controller from Trained IC
Evolved TTT Solution vs Optimal TTT Solution
Evolved TST Solution vs Optimal TST Solution
Performance from Untrained IC
Example of Grid Parameterization Error
Evolved vs Discretized Dubins
Global Coordinates
Example player
Relative Coordinates
Example of capture conditions
Three Anchor Points Subdividing a 2D Space
Initial Conditions (not to scale)
Averaging of sub-fitnesses into category fitnesses and cumulative fitness
Population Generation Method
Fitness over 5 generations
Performance of Best Controller in Generation …
Performance of Best Controller in Generation …
Performance of Best Controller in Generation …
Performance of Best Controller in Generation …
Performance of Best Controller in Generation …
Anchor Points of Best Controller in Generation …
Best Controller Against Aggressive Opponent in Untrained Initial Conditions
Best Controller Against Always Straight Opponent in Untrained Initial Conditions
Player A Gets Captured by Player B from Untrained Initial Conditions

Acknowledgments

I would like to take this opportunity to extend my thanks to Dr. Fuchs. His support and guidance were integral parts of making this thesis possible. I would also like to thank Dr. Gallagher and Dr. Palmer for taking the time to be part of my committee.

Dedicated to Stavros and Voula Androulakakis

Introduction

Autonomous systems are becoming increasingly prevalent in many aspects of modern life. From military applications and space exploration, to self-driving cars and logistics planning, these systems are an integral part of our advanced technologies. Path planning in particular is a challenging task faced by most autonomous systems. Driverless cars and UAVs must maneuver through their environments in a timely manner while satisfying dynamic and energy constraints. In cases where multiple agents are involved, teamwork and adversarial tactics come into play. Often, these types of problems can be solved using analytical methods; however, as the problems become more complex, this becomes difficult. Developing a numeric methodology that is able to generate effective controllers for these types of problems would be very useful to future control research. This thesis examines the ability of one such numeric method, evolutionary algorithms, to develop near-optimal solutions to an array of control problems.

1.1 Traditional Solutions

There is a long history of using analytic optimal control methods for analyzing path planning problems. For example, the case in which a single turn-constrained attacker is pursuing a stationary target is referred to as the Dubins Vehicle problem and has been analytically solved by Lester E. Dubins [1]. More complex scenarios in which a mobile target is able to move around and act in direct opposition to the attacker have been examined from a game

theoretic perspective and solved as shown in [2], [3], [4]. Although analytic methods provide the true equilibrium solution, it may not always be possible to analytically derive optimal solutions for realistic problems with highly nonlinear dynamics. Additionally, classical optimization techniques may not scale well for high-dimensional systems. In these situations, numerical optimization techniques provide an effective method of obtaining near-optimal solutions. Evolutionary algorithms are particularly useful for searching these large and often complex solution spaces.

1.2 Evolutionary Algorithms

Evolutionary algorithms (EAs) use the principles of natural selection and fitness to evolve a population of candidate solutions over a series of generations. These algorithms have been successfully implemented in a wide range of optimization problems. For example, in [5] the authors used an evolutionary algorithm to optimize air traffic control patterns. In [6], the authors used a combination of two evolutionary algorithms to develop control schemes for robots with complex locomotor systems. The focus of this thesis is on the application of evolutionary algorithms to path planning problems. In general, there are two types of solutions that can be developed: open-loop solutions and closed-loop solutions. Open-loop solutions take in an initial state and return a control for any requested time, u = f(x_0, t). For example, [7] uses an advanced evolutionary algorithm to evolve a sequence of waypoints that an agent follows to help navigate an obstacle-rich environment. The process of evolving open-loop solutions is extremely effective at solving for a near-optimal solution from any given initial condition; however, the resulting evolved controller is only valid for that specific initial condition. If the solution from a different initial condition is desired, then the evolutionary algorithm would need to be rerun from the new location. This can be a problem for fast-moving real-world systems. The authors of [8]

address this problem by implementing a fast evolutionary algorithm on an FPGA that is able to develop new open-loop solutions for a UAV in under 1 ms. This allows them to constantly update their trajectory by using their current state as a new initial condition. As shown in their paper, this method allows them to effectively control the UAV. While these types of online open-loop solutions can be very effective, they are limited by on-board computational power and algorithm complexity. Closed-loop solutions are able to address this shortcoming at the cost of added complexity in controller representation. Unlike open-loop solutions, closed-loop solutions are a function of the current state, u = f(x(t)), rather than the initial state, u = f(x_0, t). Instead of developing a solution from one initial condition, a closed-loop solution is able to provide a solution for all admissible states. Theoretically this allows us to evolve a feedback controller once and have a solution for all admissible states. The system implementing the feedback controller would then have the flexibility to update its control as fast as it can evaluate u = f(x(t)). In order to evolve a feedback controller, we will need to be able to parameterize it. One way to do this is to directly evolve the parameters of existing feedback control methods. The authors of [9] use this method to evolve the coefficients of a PID controller. Another method of parameterization that has drawn the focus of many researchers is the neural network [10], [11], [12], [13], [14]. Due to their universal approximation property, neural networks are theoretically able to represent any possible controller. Since evolutionary algorithms are exploring large spaces for an unknown solution, being able to guarantee that no solution is unrepresentable by the controller representation (in this case a neural network) is very important. The drawback with neural networks is the relative complexity of their implementation and evolution. Many toolboxes and software libraries have been developed that make this process easier, but the complexity of the fundamental design choices is still present. With enough experience and knowledge, neural networks are a very powerful tool, but in some problems they may be more than is necessary. This thesis explores alternative controller representations that are custom built for the

problem at hand. These representations range from something as simple as a look-up table to slightly more sophisticated methods such as a nearest-neighbor k-d search tree. The goal of using these non-standard representations is to show that even though these new representations are simple, they are still able to develop near-optimal solutions. Additionally, these controllers can be implemented in most modern programming languages with no additional libraries or toolboxes.

1.3 Thesis Overview

This thesis is broken up into four chapters. Chapter 1 contains the introduction and a brief overview of current research in this area. Chapter 2 focuses on a standard open-loop path planning control problem that will allow us to validate the EA methodology. The research shown in Chapter 2 focuses on the turn-circle intercept problem and was presented at the 2018 IEEE World Congress on Evolutionary Computation. Chapter 3 focuses on two closed-loop path planning control problems. The first of these problems is the closed-loop Dubins Vehicle problem. The Dubins Vehicle problem has a well-defined analytical solution that is used to validate our closed-loop evolutionary algorithm. The second of the two problems is the more complex Pursuit Evasion problem. This problem is the most challenging test of our methodology and serves as an example of how an EA can be used to generate solutions to problems in which the target is moving with unknown behavior. The research for this problem was presented at the 2017 IEEE Congress on Evolutionary Computation. Chapter 4 contains the conclusion and ideas for future work.

Open-loop evolution

In this chapter we attempt to evolve an open-loop controller. Open-loop controllers are defined as controllers that do not have any state feedback. In other words, open-loop controllers do not use the state of the system in the computation of the control. This makes the encoding of the controller for the purposes of crossing and mutation in an evolutionary algorithm relatively easy. The goal of this chapter is to validate the evolutionary framework by testing it on a relatively basic control problem in which the optimal solution is known.

2.1 Turn-Circle Intercept Control

This section focuses on the turn-circle intercept problem. This problem is a fundamental part of air-to-air combat and formation maintenance problems. Due to its relative simplicity and wide set of applications, this problem has been heavily researched, and many strategies and tactics for how pilots should respond based on the relative configuration of the Attacker and Target have been empirically developed [15]. Lead and lag pursuit represent two of the most fundamental techniques and can be used to maneuver an attacker into a desired position in minimum time. The goal of this section is to use an evolutionary algorithm to create an open-loop controller that is able to reproduce these empirically optimal solutions.

Figure 2.1: Global Coordinates

Problem Description

The problem considers two agents, the Attacker and the Target, moving about an obstacle-free, two-dimensional plane. The Target moves with constant speed and turn-rate, which results in a circular trajectory with fixed turn-radius and center. The Attacker also moves with constant speed, but is free to choose its turn-rate in an effort to capture the Target in minimum time. Successful capture occurs when the Attacker maneuvers into a position behind the Target on the same turn circle. The objective of the problem is to identify a control strategy for the Attacker that achieves capture in minimum time.

System Model

The Attacker's state is defined by its position, (x_A, y_A), and heading angle, θ_A. Similarly, the Target's state is defined by its position, (x_T, y_T), and heading angle, θ_T. A summary of this two-agent system is shown in Figure 2.1. The complete state of the system, x, will be referred to as the global state and is defined as the collection of the individual agent states

as well as a time state τ:

$$x := (x_A, y_A, \theta_A, x_T, y_T, \theta_T, \tau). \qquad (2.1)$$

The system dynamics $\dot{x} := f(x, u_A, u_T)$ are defined by a system of seven ordinary differential equations:

$$\dot{x}_A := v_A \cos(\theta_A) \qquad (2.2)$$
$$\dot{y}_A := v_A \sin(\theta_A) \qquad (2.3)$$
$$\dot{\theta}_A := \frac{v_A}{\rho_A} u_A \qquad (2.4)$$
$$\dot{x}_T := v_T \cos(\theta_T) \qquad (2.5)$$
$$\dot{y}_T := v_T \sin(\theta_T) \qquad (2.6)$$
$$\dot{\theta}_T := \frac{v_T}{\rho_T} u_T \qquad (2.7)$$
$$\dot{\tau} := 1, \qquad (2.8)$$

where the constants v_A > 0 and v_T > 0 are the Attacker's and Target's respective speeds. The constants ρ_A > 0 and ρ_T > 0 are the Attacker's and Target's turn radii. The Attacker controls its heading through u_A ∈ [−1, 1], and the Target controls its heading through u_T ∈ [−1, 1]. We define the state at initial time t_0 as x(t_0) = x_0 := (x_{A0}, y_{A0}, θ_{A0}, x_{T0}, y_{T0}, θ_{T0}, τ_0).
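For readers who want to experiment with the model, the dynamics (2.2)-(2.8) are easy to simulate directly. The following is a minimal sketch and is not code from the thesis: it propagates the global state with a forward-Euler step, and the speeds, turn radii, step size, and function names are illustrative assumptions.

```python
import numpy as np

def state_derivative(x, u_A, u_T, v_A=1.0, v_T=1.0, rho_A=1.0, rho_T=1.0):
    """Right-hand side of (2.2)-(2.8); x = (x_A, y_A, th_A, x_T, y_T, th_T, tau)."""
    xA, yA, thA, xT, yT, thT, tau = x
    return np.array([
        v_A * np.cos(thA),        # x_A dot
        v_A * np.sin(thA),        # y_A dot
        (v_A / rho_A) * u_A,      # theta_A dot
        v_T * np.cos(thT),        # x_T dot
        v_T * np.sin(thT),        # y_T dot
        (v_T / rho_T) * u_T,      # theta_T dot
        1.0,                      # tau dot
    ])

def simulate(x0, attacker_control, u_T=1.0, t_final=10.0, dt=0.01):
    """Forward-Euler integration of the two-agent system from x0."""
    x = np.asarray(x0, dtype=float)
    for k in range(int(t_final / dt)):
        x = x + dt * state_derivative(x, attacker_control(k * dt), u_T)
    return x
```

In the thesis the trajectories are obtained in closed form (Equations 2.9-2.11 and 2.15-2.17), so a numerical integrator like this is only needed if one wants to test arbitrary control signals.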

Controller Design

In the turn-circle intercept problem, the Target is assumed to be constantly turning in a circle. To accomplish this, we arbitrarily assign the Target a constant turn rate of u_T(t) = 1 (turn right). Using the assumed control strategy and initial condition, the Target's trajectory can be calculated by integrating the dynamics (2.5)-(2.7) with respect to time:

$$x_T(t; x_0) = x_{T0} - \rho_T \sin\theta_{T0} + \rho_T \sin\!\left(\theta_{T0} + \tfrac{v_T}{\rho_T} t\right) \qquad (2.9)$$
$$y_T(t; x_0) = y_{T0} + \rho_T \cos\theta_{T0} - \rho_T \cos\!\left(\theta_{T0} + \tfrac{v_T}{\rho_T} t\right) \qquad (2.10)$$
$$\theta_T(t; x_0) = \theta_{T0} + \tfrac{v_T}{\rho_T} t. \qquad (2.11)$$

This results in a circular trajectory with a center located at

$$(x_c, y_c) := \left(x_{T0} - \rho_T \sin\theta_{T0},\ y_{T0} + \rho_T \cos\theta_{T0}\right) \qquad (2.12)$$

and a radius of ρ_T. Previous analysis of problems with similar dynamics [16, 17, 18] has shown that the optimal control strategies typically possess a bang-zero-bang structure, in which the agent implements either a hard left turn u_A = 1, no turn u_A = 0, or a hard right turn u_A = −1. Therefore, we represent the Attacker's control strategy as a piecewise-constant function of time consisting of N segments:

$$u_A(t; C_A) = \begin{cases} u_1 & t_0 \le t < t_1 \\ u_2 & t_1 \le t < t_2 \\ u_3 & t_2 \le t < t_3 \\ \vdots & \vdots \\ u_N & t_{N-1} \le t < t_N \\ 0 & t_N \le t, \end{cases} \qquad (2.13)$$

where the parameter set C_A := {{u_1, u_2, ..., u_N}, {t_1, t_2, ..., t_N}} contains the controls, u_i ∈ {−1, 0, 1}, and segment times, t_i. The control structure assumes that

$$t_1 \le t_2 \le \cdots \le t_N. \qquad (2.14)$$
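Evaluating the piecewise-constant strategy (2.13) amounts to finding the active segment for a query time. A minimal sketch (a hypothetical helper, not taken from the thesis) is:

```python
def piecewise_control(t, controls, times):
    """u_A(t; C_A) from (2.13): controls = [u_1..u_N] in {-1, 0, 1}, times = [t_1..t_N] sorted."""
    for u_i, t_i in zip(controls, times):
        if t < t_i:
            return u_i
    return 0  # zero control once the final segment time t_N has passed

# hard left until t=2, straight until t=5, hard right until t=7, then zero
assert piecewise_control(4.0, [1, 0, -1], [2.0, 5.0, 7.0]) == 0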

Substituting the Attacker's control into the Attacker dynamics (2.2)-(2.4) and integrating provides the Attacker's trajectory as a function of time:

$$x_A(t; x_0, C_A) = \begin{cases} x_{Ai} - \rho_A \sin\theta_{Ai} + \rho_A \sin\!\left(\theta_{Ai} + \tfrac{v_A}{\rho_A}(t - t_i)\right) & u_{i+1} = 1 \\ x_{Ai} + v_A \cos(\theta_{Ai})(t - t_i) & u_{i+1} = 0 \\ x_{Ai} + \rho_A \sin\theta_{Ai} - \rho_A \sin\!\left(\theta_{Ai} - \tfrac{v_A}{\rho_A}(t - t_i)\right) & u_{i+1} = -1 \end{cases} \qquad (2.15)$$

$$y_A(t; x_0, C_A) = \begin{cases} y_{Ai} + \rho_A \cos\theta_{Ai} - \rho_A \cos\!\left(\theta_{Ai} + \tfrac{v_A}{\rho_A}(t - t_i)\right) & u_{i+1} = 1 \\ y_{Ai} + v_A \sin(\theta_{Ai})(t - t_i) & u_{i+1} = 0 \\ y_{Ai} - \rho_A \cos\theta_{Ai} + \rho_A \cos\!\left(\theta_{Ai} - \tfrac{v_A}{\rho_A}(t - t_i)\right) & u_{i+1} = -1 \end{cases} \qquad (2.16)$$

$$\theta_A(t; x_0, C_A) = \begin{cases} \theta_{Ai} + \tfrac{v_A}{\rho_A}(t - t_i) & u_{i+1} = 1 \\ \theta_{Ai} & u_{i+1} = 0 \\ \theta_{Ai} - \tfrac{v_A}{\rho_A}(t - t_i) & u_{i+1} = -1 \end{cases} \qquad (2.17)$$

where the index i satisfies $i = \arg\max_i \{t_i < t\}$ and the intermediate state components are computed recursively as x_{Ai} = x_A(t_i; x_0, C_A), y_{Ai} = y_A(t_i; x_0, C_A), and θ_{Ai} = θ_A(t_i; x_0, C_A). The initial conditions are defined as x_A(t_0; x_0, C_A) = x_{A0}, y_A(t_0; x_0, C_A) = y_{A0}, and θ_A(t_0; x_0, C_A) = θ_{A0}.

Figure 2.2: Desired Separation

Utility

Given an initial condition x_0, control parameter set C_A, and terminal time t_f, the final system state, x_f, is computed using the trajectories defined in (2.9)-(2.11) and (2.15)-(2.17):

$$x_f(C_A, t_f) := (x_{Tf}, y_{Tf}, \theta_{Tf}, x_{Af}, y_{Af}, \theta_{Af}, t_f) \qquad (2.18)$$
$$= \left(x_T(t_f; x_0), y_T(t_f; x_0), \theta_T(t_f; x_0), x_A(t_f; x_0, C_A), y_A(t_f; x_0, C_A), \theta_A(t_f; x_0, C_A), t_f\right). \qquad (2.19)$$

For this problem, the terminal time is assumed to be the maximum time of the final control segment: t_f = t_N. The Attacker strives to maneuver into a position behind the Target, which is both on the Target's turn circle and at a desired separation distance, as shown in Figure 2.2. The desired separation between the Target and Attacker is defined in terms of the angle ᾱ. Using

these conditions, the desired (x, y) coordinates of the Attacker can be expressed as

$$\bar{x} = (x_{Tf} - x_c)\cos(\bar{\alpha}) - (y_{Tf} - y_c)\sin(\bar{\alpha}) + x_c \qquad (2.20)$$
$$\bar{y} = (y_{Tf} - y_c)\cos(\bar{\alpha}) + (x_{Tf} - x_c)\sin(\bar{\alpha}) + y_c, \qquad (2.21)$$

where x_c and y_c are the coordinates of the center of the Target's turn circle as defined in (2.12). We define an error function in terms of the Attacker's terminal distance from the desired position:

$$h_1(x_f; \bar{\alpha}) := (\bar{x} - x_{Af})^2 + (\bar{y} - y_{Af})^2. \qquad (2.22)$$

An additional constraint on the Attacker's terminal heading is imposed to require tangential motion to the Target's turn circle:

$$h_2(x_f) := \left(\cos(\theta_{Af}) - \cos(\theta_{Tf} + \bar{\alpha})\right)^2 + \left(\sin(\theta_{Af}) - \sin(\theta_{Tf} + \bar{\alpha})\right)^2. \qquad (2.23)$$

Since the Attacker strives to minimize the total elapsed time to capture, we define a time-based utility function that consists of the total elapsed time of all control segments:

$$h_3(x_f; C_A) = t_N. \qquad (2.24)$$

The overall utility function is a weighted sum of all three utilities:

$$U(C_A) = w_1 h_1 + w_2 h_2 + w_3 h_3, \qquad (2.25)$$

where w_1, w_2, and w_3 are positive weight coefficients. Adjusting the relative magnitudes of the weights prioritizes satisfying the terminal constraints or minimizing total time.
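Putting (2.20)-(2.25) together, the utility of a candidate trajectory can be scored as below. This is an illustrative sketch only; the argument names and default weights are assumptions, not the thesis implementation, and the terminal states and turn-circle center are assumed to be available from the closed-form trajectories.

```python
import numpy as np

def utility(x_Af, y_Af, th_Af, x_Tf, y_Tf, th_Tf, x_c, y_c, t_N,
            alpha_bar, w1=1.0, w2=1.0, w3=1.0):
    """Weighted utility U(C_A) from (2.25): position error, heading error, elapsed time."""
    # Desired terminal position (2.20)-(2.21): rotate the Target about the circle center by alpha_bar
    x_bar = (x_Tf - x_c) * np.cos(alpha_bar) - (y_Tf - y_c) * np.sin(alpha_bar) + x_c
    y_bar = (y_Tf - y_c) * np.cos(alpha_bar) + (x_Tf - x_c) * np.sin(alpha_bar) + y_c

    h1 = (x_bar - x_Af) ** 2 + (y_bar - y_Af) ** 2              # position error (2.22)
    h2 = ((np.cos(th_Af) - np.cos(th_Tf + alpha_bar)) ** 2       # heading error (2.23)
          + (np.sin(th_Af) - np.sin(th_Tf + alpha_bar)) ** 2)
    h3 = t_N                                                     # elapsed time (2.24)
    return w1 * h1 + w2 * h2 + w3 * h3
```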

Problem Definition

Using the overall utility function (2.25), we can state the trajectory optimization problem in terms of a minimization over the Attacker's parameter set:

$$\min_{C_A} U(C_A). \qquad (2.26)$$

The resulting optimal parameter set C_A* is used to define the optimal Attacker control strategy u_A*(t) = u_A(t; C_A*). The optimization problem defined in (2.26) contains both integer variables, {u_1, u_2, ..., u_N}, and continuous variables, {t_1, t_2, ..., t_N}, which can be very challenging to solve using traditional optimization techniques. In Section 2.1.2, we present an evolutionary algorithm method for solving this optimization problem that simultaneously explores the integer and continuous parameter domains.

2.1.2 Evolutionary Architecture

The Attacker control parameter set, C, contains both integer and continuous values. Additionally, these parameters are closely coupled in how they influence the overall trajectory of the Attacker and the resulting utility. In order to simultaneously explore both the discrete and continuous parameter space, we employ an evolutionary algorithm.

Controller Encoding

In order to satisfy the ordering constraint (2.14), we do not directly evolve the segment times {t_1, t_2, ..., t_N}. Instead, we evolve the duration of each segment through the parameters {Δt_1, Δt_2, ..., Δt_N}, where 0 ≤ Δt_i ≤ $\overline{\Delta t}$. The maximum segment duration $\overline{\Delta t}$ is used to upper-bound the search space. When the controller is evaluated, the segment times are computed as

$$t_i = \sum_{j=1}^{i} \Delta t_j. \qquad (2.27)$$

This implicitly ensures that t_{i+1} ≥ t_i and thus satisfies (2.14). Therefore, a candidate controller C = {{u_1, u_2, ..., u_N}, {t_1, t_2, ..., t_N}} is represented as

$$c = \{\{u_1, u_2, \ldots, u_N\}, \{\Delta t_1, \Delta t_2, \ldots, \Delta t_N\}\} \qquad (2.28)$$

within the evolutionary algorithm. The task of the evolutionary algorithm is to evolve c in order to optimize (2.26).

Initial Population

To begin, we initialize a population of M candidate controllers to serve as our first generation, G_0 = {c_1, c_2, ..., c_M}. These candidate controllers are initialized with random control values uniformly drawn from the set {−1, 0, 1}, p(u_i) = 1/3, and time values uniformly selected from the range [0, $\overline{\Delta t}$]. After generating the initial population, G_0, the evolutionary algorithm will create generation G_1 with population assigned as shown in Figure 2.3. The top 5% of the old generation are considered elite and are passed on to the next generation without any change. This ensures that the top-performing controllers persist through to the next generation and don't accidentally get degraded by the crossing and mutation operations. The remaining 95% of the next generation is filled with the results of crossing and mutation. This process will repeat until we reach a predefined number of generations and end with generation G_f.

Figure 2.3: Creating Next Generation
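The generational loop of Figure 2.3 can be summarized in a few lines. This is a sketch under the assumption that a controller is stored as a list of (control, duration) pairs and that lower utility is better; `cross` and `mutate` are hypothetical helpers whose behavior is specified in the next two subsections.

```python
import random

def next_generation(population, utilities, elite_frac=0.05):
    """Build the next generation: keep the top 5% unchanged, fill the rest by crossing + mutation."""
    ranked = [c for _, c in sorted(zip(utilities, population), key=lambda pair: pair[0])]
    n_elite = max(1, int(elite_frac * len(population)))
    new_gen = ranked[:n_elite]                    # elite controllers pass through unaltered
    while len(new_gen) < len(population):
        p1, p2 = random.sample(population, 2)     # parents chosen uniformly (see Crossing)
        new_gen.append(mutate(cross(p1, p2)))     # cross, then mutate (see below)
    return new_gen
```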

Crossing

Crossing begins by uniformly selecting two parent controllers, c_A and c_B, from the previous generation. The child controller c_C is created by crossing each segment of the parents in a two-step process. First, a segment control value is uniformly selected from the parents for each segment:

$$p(u_{ci}) = \begin{cases} 0.5 & u_{Ai} \\ 0.5 & u_{Bi}. \end{cases} \qquad (2.29)$$

Second, a new time is uniformly selected for each segment from a range defined by the parent segment times:

$$p(\Delta t_{ci}) = \begin{cases} \dfrac{1}{\Delta t_{max} - \Delta t_{min}} & \Delta t_{ci} \in [\Delta t_{min}, \Delta t_{max}] \\ 0 & \text{otherwise}, \end{cases} \qquad (2.30)$$

where

$$\Delta t_{min} = \max\!\left(0,\ 0.5(\Delta t_{Ai} + \Delta t_{Bi}) - \gamma_1 \lvert \Delta t_{Ai} - \Delta t_{Bi} \rvert\right) \qquad (2.31)$$
$$\Delta t_{max} = \min\!\left(\overline{\Delta t},\ 0.5(\Delta t_{Ai} + \Delta t_{Bi}) + \gamma_1 \lvert \Delta t_{Ai} - \Delta t_{Bi} \rvert\right). \qquad (2.32)$$

The lower and upper bounds defined by (2.31) and (2.32) ensure that the child times still satisfy the parameter bounds. The growth parameter γ_1 ≥ 0.5 introduces a mutation factor that allows the parameters to explore values beyond the bounds imposed by the parent parameters. Figure 2.4 shows an example of distribution bounds for three different values of Δt_{Ai} and Δt_{Bi}. The closer Δt_{Ai} and Δt_{Bi} are, the smaller the range that the child's

corresponding time parameter will be selected from.

Figure 2.4: Crossing Examples

Mutation

After a child parameter set is produced, random mutations are applied in order to explore the parameter space. Each element of the control set is selected for mutation with likelihood µ_1. When a mutation occurs, a new control value is selected uniformly from the set {−1, 0, 1}. The resulting distribution for control elements within the mutated control set {u_{m1}, u_{m2}, ..., u_{mN}} is

$$p(u_{mi}) = \begin{cases} 1 - \mu_1 & u_{mi} = u_{ci} \\ \dfrac{\mu_1}{3} & u_{mi} \in \{-1, 0, 1\}. \end{cases} \qquad (2.33)$$

Each element of the time duration set is selected for mutation with likelihood µ_2. When a mutation occurs for a time value Δt_{ci}, a new value is randomly selected in the range [Δt_{min}, Δt_{max}], where Δt_{min} = max(0, Δt_{ci} − σ) and Δt_{max} = min($\overline{\Delta t}$, Δt_{ci} + σ). The parameter σ is the mutation magnitude. The resulting distribution for each duration

element of the mutated set {Δt_{m1}, Δt_{m2}, ..., Δt_{mN}} is

$$p(\Delta t_{mi}) = \begin{cases} 1 - \mu_2 & \Delta t_{mi} = \Delta t_{ci} \\ \dfrac{\mu_2}{\Delta t_{max} - \Delta t_{min}} & \Delta t_{mi} \in [\Delta t_{min}, \Delta t_{max}]. \end{cases} \qquad (2.34)$$
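For concreteness, the crossing rule (2.29)-(2.32) and the mutation rules (2.33)-(2.34) could be implemented roughly as follows. This is a sketch, not the thesis code: a controller is assumed to be a list of (control, duration) pairs, `dt_bar` stands for the maximum segment duration, and the default parameter values are placeholders that simply mirror Section 2.1.3.

```python
import random

def cross(parent_a, parent_b, gamma1=0.5, dt_bar=10.0):
    """Segment-wise crossing: controls via (2.29), durations via (2.30)-(2.32)."""
    child = []
    for (uA, dtA), (uB, dtB) in zip(parent_a, parent_b):
        u = random.choice([uA, uB])                                  # (2.29)
        mid, spread = 0.5 * (dtA + dtB), gamma1 * abs(dtA - dtB)
        lo, hi = max(0.0, mid - spread), min(dt_bar, mid + spread)   # (2.31), (2.32)
        child.append((u, random.uniform(lo, hi)))                    # (2.30)
    return child

def mutate(child, mu1=0.001, mu2=0.001, sigma=0.1, dt_bar=10.0):
    """Control mutation (2.33) and duration mutation (2.34)."""
    out = []
    for u, dt in child:
        if random.random() < mu1:
            u = random.choice([-1, 0, 1])                            # redraw the control uniformly
        if random.random() < mu2:
            lo, hi = max(0.0, dt - sigma), min(dt_bar, dt + sigma)
            dt = random.uniform(lo, hi)                              # redraw the duration near the old value
        out.append((u, dt))
    return out
```

These two helpers are the ones assumed by the generational loop sketched after Figure 2.3.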

2.1.3 Results and Analysis

The results in this section are obtained by evolving a population of M = 150 candidate controllers over 500 generations with the parameters shown below. Several parameter sets were tested, and these represent just one of the many possible parameter sets that will result in successful evolution.

Velocity: v_A = v_B = 1
Maximum Turn-Rate: ρ_A = ρ_B = π/7
Desired Separation: ᾱ = π/8
Mutation Chance: µ_1 = µ_2 = 0.1%
Mutation Severity: σ = 0.1
Utility Weights: w_1 = 1, w_2 = 1, w_3 = 1

In order to ensure that the resulting candidate controller is among the best solutions obtainable by our algorithm, 10 independent evolutions are performed. Since the focus of this thesis is on the behavior of the resulting solution rather than the statistical performance of the algorithm, only the evolution that results in the candidate controller with the lowest overall utility is used in the analysis.

Basic Air Combat Tactics

Before examining the results of the evolutionary algorithm, we introduce three basic air combat pursuit tactics: Lead Pursuit, Lag Pursuit, and Pure Pursuit. These general tactics are used to adjust an agent's position relative to a target or wingman without needing to adjust throttle or speed [15]. In order to evaluate the performance of the evolutionary algorithm, we evolve controllers for initial conditions that would traditionally implement

one of these tactics. Although we did not explicitly impose these concepts into the structure of the Attacker's controller, the optimized control strategies produced by the evolutionary algorithm exhibit qualitatively similar behaviors.

Lead pursuit allows the Attacker to close the distance between itself and the Target by cutting inside the Target's turn circle and traveling a shorter path. Figure 2.5 shows an example trajectory in which lead pursuit is implemented. The markings along the trajectory show the location of each agent at equal points in time.

Figure 2.5: Lead Pursuit

Lag pursuit allows the Attacker to fall behind the Target by steering its heading outside of the Target's turn circle. Figure 2.6a shows an example trajectory in which lag pursuit is implemented. The Attacker increases the distance between itself and the Target by taking a wide turn. In the case where the Attacker starts in front of the Target, this technique can be used to reposition the Attacker behind the Target, as shown in Figure 2.6b. The case in which the Attacker is exactly on the Target's turn circle is referred to as pure pursuit. Pure pursuit will cause the two agents to move around in a circle together and the relative positions of the two agents to remain

constant.

Figure 2.6: Lag Pursuit. (a) Attacker Behind; (b) Attacker In Front

Evolved Lag Pursuit

We begin with the results of evolving a seven-segment controller from initial condition x_0 = (1.575, 0.653, π/4, 0, 0, 0, 0), which represents the case where the Attacker is in front of the Target. According to standard tactics, the Attacker should implement lag pursuit in order to fall back to the desired position. As stated in the beginning of this section, 10 independent evolutions were run. These runs had an average final best evolved utility of … with a standard deviation of …. From these 10 evolutions, the run containing the controller with the lowest utility in generation 500 was selected for the following analysis. Figure 2.7 shows the minimum utility achieved in each generation of the evolution. Multiple points of interest have been marked on this plot, each of which corresponds to a different evolutionary leap. Figure 2.8 shows the trajectory and control strategy of the best player in each of these

marked generations. The trajectory plots show the Attacker's trajectory in red and the Target's trajectory in blue. An x symbol indicates the initial condition. The empty circles represent the transition times between segments of the Attacker's controller. The larger filled circles indicate the final positions. The thick empty black circle represents the desired Attacker location. The control strategy (shown below each trajectory) shows how long each control segment was implemented; a value of −1 = Left, 0 = Straight, and 1 = Right. The x and circle markers refer to the corresponding Attacker markers in the trajectory plot.

Figure 2.7: Utility Summary

In Generation 1, the Attacker randomly moves around the space. This trajectory has a long total time and a final position and heading that are not close to the desired state. For these reasons, the overall utility for this Attacker is very high, at a value of 53.99. We can see from the control summary that, out of the seven segments, only five control switches were made. The last three segments were all turn left, which could also have been accomplished by one segment with a time equal to the sum of the three segments' times. By taking advantage of this phenomenon, the EA can explore control solutions with a lower number of segments.

Figure 2.8: Evaluation of Evolution. (a) G1: U(c) = 53.99; (b) G9: U(c) = 39.23; (c) G23: U(c) = 29.34; (d) G43: U(c) = 20.5; (e) G136: U(c) = 14.66; (f) G500: U(c) = 12.2, t_f = 10.1.

In Generation 9, shown in Figure 2.8b, the Attacker has increased its total trajectory time. Although this constitutes an increase in h_3, the final heading is much closer to the desired heading, and the reduction in the weighted heading utility, w_2 h_2, is enough to ensure that the overall utility has decreased. Additionally, it appears that the control strategy only contains six segments. While not visible in these plots, segment four has had its time reduced down to 0.1 seconds and thus is effectively removed from the control. Once again, this behavior allows the EA to explore lower-dimensional solutions and indicates that we may have more degrees of freedom in our controllers than is necessary. This trend continues into Generation 23, as shown in Figure 2.8c, where we can see that the total time has been reduced. The final Attacker position has also moved closer to the desired location, further reducing the utility to 29.34. Looking at the control summary, the controller has further reduced its effective number of segments down to four. The best controller in Generation 43, shown in Figure 2.8d, removes the extra loop and reduces the time further. This trajectory is beginning to show elements of lag pursuit. We can see that the Attacker begins by turning right and moving its turn circle outside of the Target's turn circle. The end of its pursuit, however, is incorrect. The Attacker stays outside the Target's turn circle and is unable to reach the desired state. This lag pursuit path is further refined in Generation 136, shown in Figure 2.8e, where the total time has been reduced to approximately 10.94 seconds and the final position and heading are now very close to the desired state. Finally, in Generation 500, shown in Figure 2.8f, the resulting trajectory places the Attacker almost exactly in the desired position relative to the Target. In addition to that, the time has been reduced to just over 10 seconds and the final heading is almost exactly tangent to the Target's turn circle, giving it a utility of 12.2. From our initial seven segments, the final evolved control strategy effectively utilizes four segments. This indicates that the solution to this problem does not require the total number of segments it was originally

given. To test this, we reran the evolutionary algorithm with only three segments. The resulting best controller's trajectory and control summary are shown in Figure 2.9. We can see that this three-segment solution is very similar to our seven-segment solution. In fact, the three-segment solution performed slightly better by reaching the desired state with an overall time of 9.23 seconds. This shows that the excess segments made solving the problem more complicated, since the EA had to search a larger space of solutions and slowly remove the unnecessary dimensions. Even though the evolutionary algorithm was given more segments than necessary, it was able to remove the excess degrees of freedom and find an efficient solution.

Figure 2.9: Evolved 3 Segment Lag Pursuit

Evolved Lead Pursuit

Next we examine initial condition x_0 := (−1.575, 0.653, −π/4, 0, 0, 0, 0), which places the Attacker further behind the Target than desired. Standard doctrine states that the Attacker

should implement lead pursuit to catch up to the Target. The trajectory of the best evolved Attacker using seven segments is shown in Figure 2.10. Of the seven starting segments, the final controller only effectively utilizes two. Rerunning the evolutionary algorithm to start with three segments, we can see that a similar solution is found, as shown in Figure 2.11.

Figure 2.10: Evolved Lead Pursuit (7 Seg)

Figure 2.11: Evolved Lead Pursuit (3 Seg)

Lead pursuit is expected from this initial condition; however, the Attacker performs what looks to be a needless loop at the beginning of its trajectory. This extra loop is implemented because the Attacker starts on the Target's turn circle and both Attacker and Target have the same turn rate. Therefore, it is not possible for the Attacker to implement lead pursuit from the start because it cannot turn sharply enough to cut into the Target's turn circle as shown in Figure 2.5. In our case, if the Attacker started turning left in an attempt to do this, it would simply follow the Target around in pure pursuit. To solve this problem, the evolutionary algorithm found a solution that looped the Attacker around into a position from which it could cut into the Target's turn circle and implement lead pursuit.

When we allow the Attacker to turn faster than the Target, ρ_A = π/6, as shown in Figure 2.12, the loop is no longer needed and the Attacker is able to implement lead pursuit as soon as it begins.

Figure 2.12: Evolved 3 Segment Lead Pursuit (ρ_A = π/6)

Intercept Trajectory

In addition to the lead and lag trajectories evaluated in Section 2.1.3, we also examined an initial condition

$$x_0 := (-8.99, 0, 0, 0, 0, 0, 0) \qquad (2.35)$$

that is further off of the Target's turn circle, in which the Attacker must implement an intercept trajectory that times the entry into the Target's turn circle. Figure 2.13 shows the best evolved seven-segment trajectory. The control summary shows that of our seven segments, only four are utilized. The Attacker begins turning

the same direction as the Target until it completes about 1/4 of a full circle. While not immediately evident, the Attacker then begins implementing a lag pursuit strategy. In order for the Attacker to end up behind the Target, it has to move its turn circle further out. We can see that from t = 3.32 to t = 4.9, the Attacker moves straight. This allows the Attacker to push its turn circle outside of the Target's turn circle. After performing the final turn, we can see that the Attacker is indeed behind the Target and able to reach the desired location and heading.

Figure 2.13: Evolved Intercept Trajectory (7 Segments)

Figure 2.14: Evolved Intercept Trajectory (3 Segments)

Figure 2.14 shows the best evolved three-segment trajectory. As with our previous results, the three-segment evolution developed a solution that is very similar to the seven-segment solution. Instead of using a straight segment to move the Attacker's turn circle, this three-segment solution uses a combination of turn left and turn right. Overall, this result shows the ability of the evolutionary algorithm to utilize the underlying principles of lead and lag pursuit to time its approach and effectively navigate to the desired state.

Impact of Results

Although we do not explicitly impose the concepts of lead and lag pursuit on the Attacker's control structure, the optimal strategies produced by our EA possess qualitatively similar behaviors. This result provides some validation that the evolutionary algorithm is able to generate near-optimal results. The custom controller representation used allowed the EA to search solution spaces of differing sizes. It also allowed us to perform crossing and mutation very simply. Since the evolved controller is open-loop, we can only implement the evolved solutions from the initial condition it was evolved from. This is sufficient for our example, but real-world systems will need more robust control schemes. For this reason, we will expand our methods in the next section to attempt to evolve closed-loop solutions.

Closed-loop evolution

With the previous open-loop results in mind, we can move on to developing closed-loop controllers. A closed-loop controller utilizes the state of the system it is controlling in the computation of its output. In the case of pursuit problems, this means that the attacker now has knowledge of its own location and the target's location. By utilizing this information we can develop controllers that are much more sophisticated than their open-loop counterparts. However, this added sophistication comes at the cost of complexity. Unlike open-loop controllers, closed-loop controllers must be able to return a control output for all admissible times and all admissible states. This greatly expands the scope of possible solutions and makes representing and evolving a solution with an EA much more difficult. This chapter will examine two different control problems in which a closed-loop solution is desired. The first problem is the Dubins Vehicle problem. The second problem is the Pursuit Evasion problem. Each of these problems is designed to show how an evolutionary algorithm can be applied to solve closed-loop control problems.

3.1 Dubins Vehicle

The first problem we will examine is the Dubins Vehicle problem. We chose this problem because it has an analytical solution that we can use to compare our evolved solutions to. The analytical solution was derived by Lester E. Dubins [1]. By comparing our evolved solution to the known optimal, we can validate our closed-loop EA methodology.

Figure 3.1: Global Coordinates

Problem Description

The Dubins Vehicle problem considers a single agent moving with constant speed and constrained turn radius about an obstacle-free, two-dimensional plane. The objective of this section is to design a feedback controller for the agent that moves it from an initial position and heading to a desired final position and heading in minimum time.

System Model

Much like with the turn-circle intercept problem, we begin by defining the state of the system. The agent's state is defined by its position, (x_A, y_A), and heading angle, θ_A. In this problem, there is only one agent, so the complete state of the system, x_G, will be referred to as the global state and is defined as the collection of the agent's state components as well as a time state τ:

$$x_G := (x_A, y_A, \theta_A, \tau). \qquad (3.1)$$

Figure 3.2: Relative Coordinates

The system dynamics ẋ_G := f(x_G, u) are defined by the following system of ordinary differential equations:

$$\dot{x}_A := v\cos(\theta_A), \qquad \dot{y}_A := v\sin(\theta_A), \qquad \dot{\theta}_A := \rho u, \qquad \dot{\tau} := 1,$$

where the constant v > 0 is the agent's speed and ρ > 0 is the agent's turning rate. The agent controls its heading through u ∈ [−1, 1]. The desired state is defined as x_D := (x_D, y_D, θ_D), where x_D, y_D, and θ_D are parameters that define the desired state's position and heading. Figure 3.1 illustrates the agent and a desired state in the two-dimensional x-y plane. It is important to note that the desired position and heading are constant parameters that do not change during the course of the simulation. We will now introduce a second coordinate system which will represent the location of the agent and the desired state relative to the location of the agent. This representation will allow us to reduce the number of dimensions in later analysis as well as represent the terminal conditions in a more concise manner. This new coordinate system will be referred to as the relative coordinate system.

Figure 3.3: Buffer Zones. (a) Agent's Buffer Zone; (b) Desired State's Buffer Zone

The state of the system in relative coordinates, x_R, is defined as

$$x_R := (d, \psi, \psi_D, \tau), \qquad (3.2)$$

where the state d represents the distance between the agent and the desired state. The angle ψ represents the agent's relative heading angle and ψ_D represents the desired state's relative heading angle. Both angles are measured counterclockwise from the line segment AD. Just as in the global coordinate system, we also include a time state τ. The relative coordinate system is depicted graphically in Figure 3.2.

Terminal Conditions

In the Dubins Vehicle problem, termination is achieved when the agent's state is exactly equal to the desired state. However, for real-world applications, there will always be a small amount of error between the current state and the desired state. Additionally, the evolutionary algorithm may generate candidate controllers that never reach the desired state. Therefore, we consider three possible terminal conditions: successful arrival, maximum distance reached, and maximum time reached.

Figure 3.4: Terminal Condition X_1 Example

Terminal region X_1 represents the situation where the agent has reached the desired state, as shown in Figure 3.4. In this situation, two distinct criteria have been met. First, the desired state is within the buffer zone of the agent (represented by the green region in front of the agent). Second, the agent is within the buffer zone of the desired state (represented by the red region behind the desired state). Using these criteria, the set of states representing successful navigation is defined as

$$X_1 := \{x_R \in \mathbb{R}^7 \mid d < d_b,\ \cos(\psi) \ge \cos(\beta),\ \cos(\psi_D) \ge \cos(\gamma)\},$$

where d_b is a positive constant defining the distance buffer and β and γ are positive constants defining the angle buffers. We can see that if d_b = β = γ = 0 then the termination condition can only be met when the agent's state is exactly equal to the desired state. So by setting d_b, β, and γ to small nonzero values, we can create a small region around the desired state that the agent can enter to terminate the simulation. The smaller we make these constants, the closer X_1 is to the actual Dubins Vehicle terminal region. In the cases when the agent implements a control that does not reach the desired state, we need alternative termination conditions. Terminal region X_2 represents the scenario in which the agent reaches a maximum separation distance, d_max, from the desired state:

$$X_2 := \{x_R \in \mathbb{R}^7 \mid d > d_{max}\}.$$

This prevents the agent from straying too far from the desired position. The case in which the maximum time t_max is reached is contained in terminal region X_3:

$$X_3 := \{x_R \in \mathbb{R}^7 \mid \tau > t_{max}\}.$$

Together, these termination conditions ensure that the simulation will end regardless of the controller implemented. We define the terminal state as x_f = x_R(t_f), where the terminal time, t_f, is defined as the moment the relative state of the system falls into the set

$$X_T := X_1 \cup X_2 \cup X_3.$$
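In a simulation loop, these three regions reduce to a small predicate on the relative state. The following sketch is illustrative only; the buffer and limit values are placeholders, not the thesis settings.

```python
import numpy as np

def terminal_region(d, psi, psi_D, tau,
                    d_b=1.0, beta=np.pi / 2, gamma=np.pi / 2,
                    d_max=10.0, t_max=50.0):
    """Return 1, 2, or 3 if the relative state lies in X_1, X_2, or X_3, else 0."""
    if d < d_b and np.cos(psi) >= np.cos(beta) and np.cos(psi_D) >= np.cos(gamma):
        return 1   # successful arrival at the desired state
    if d > d_max:
        return 2   # strayed too far from the desired state
    if tau > t_max:
        return 3   # ran out of time
    return 0       # not terminal yet
```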

Game Utility

The agent's utility function, U(u(x); x_0), consists of a terminal value function φ(x_f) and a time reward, as shown in the following equation:

$$U(u(x); x_0) := w_1 \varphi(x_f) + w_2 (t_{max} - \tau_f). \qquad (3.3)$$

The terminal value function φ(x_f) is defined as

$$\varphi(x_f) := \begin{cases} c_1 & x_f \in X_1 \\ c_2 & x_f \in X_2 \\ c_3 & x_f \in X_3. \end{cases} \qquad (3.4)$$

The positive constants w_1 and w_2 are designed to weigh the effects of the terminal value

function and time reward on the overall utility. They also include a scaling factor so that the effect of each term will remain proportional to its maximum. The values used in this thesis were w_1 = 70/c_1 and w_2 = 30/t_max. With these two weights, the terminal value function can contribute a maximum utility of 70 and the time reward can contribute a maximum utility of 30, for a total maximum possible utility of 100. The terminal value function φ(x_f) is designed to reward the agent for reaching the desired state. In the event the agent is unable to reach the desired state, agents that stay within the maximum distance until time runs out are favored over agents that exceed the maximum distance. In general, the preference for these particular termination conditions is modeled by selecting weight parameters that satisfy c_1 > c_3 > c_2. For this research, we used the specific values of c_1 = 100, c_2 = 0, and c_3 = 25. The terminal value function rewards successful navigation to the desired state, but is not enough to differentiate which controller does it in the shortest amount of time. Therefore, a time reward is used that is based on the elapsed time to complete the navigation, τ_f. The less time the agent takes to complete the simulation, the higher the reward given. Together with the terminal value function, the time reward ensures controllers that perform fast and efficient navigations to the desired state (like the Dubins Vehicle optimal solution) are rewarded the maximum utility.

Controller Design

Optimality analysis of games with similar dynamics [19], [3] has shown that only the relative configuration information is needed when deciding the optimal control. Through the use of a relative coordinate system, we can design a controller that will only be dependent on ψ, ψ_D, and d. We begin developing this controller by defining the boundaries of the state space. The two relative angles, ψ and ψ_D, can be any real number, but these angles are mapped to their coterminal angles between 0 and 2π by applying the modulo function. This operation

exploits the periodic nature of angles and reduces the range of the state space in those dimensions to a finite region. The distance d, as defined in Section 3.1.1, is bounded between 0 and d_max. Combining these ranges implies that all the admissible values of our state space can be represented in a finite three-dimensional subspace:

$$X_C = \{0 \le \psi \le 2\pi,\ 0 \le \psi_D \le 2\pi,\ 0 \le d \le d_{max}\}.$$

We can now divide this space into a grid as defined by three parameters, ε_1, ε_2, and ε_3. These parameters represent the resolution of the grid in the ψ, ψ_D, and d dimensions respectively. With this grid, we can now assign a control value to each of the cells to create our feedback controller. Although the control is permitted to be continuous within the range of [−1, 1] as defined in Section 3.1.1, it has been shown in analyses of differential games and optimal control strategies with similar dynamics [17] that the true optimal control strategy has a bang-zero-bang structure. We will also show in Section 3.1.4 that the analytic optimal Dubins solution has a bang-zero-bang structure. Bang-zero-bang means that the agent will only implement a hard right, u = −1; go straight, u = 0; or hard left, u = 1, control. Therefore, we chose to use discrete control values of either u = −1 (turn right) or u = 1 (turn left) for our grid. While not explicitly included, the go-straight control (u = 0) is realized at the boundaries between the −1 and 1 regions [20, 21]. Figure 3.5 shows an example of a control grid with resolution ε_1 = ε_2 = ε_3 = 4. The regions in blue represent a control value of −1 (turn right) and the regions in yellow represent a control value of +1 (turn left). This grid can now be used as a feedback controller that has a defined control for every admissible input state.

Figure 3.5: Example of a 4x4x4 Grid Controller

We can parametrize the controller using the three-dimensional ε_1 × ε_2 × ε_3 matrix u:

$$u = (u_{i,j,k}), \qquad i, j, k \in \mathbb{Z}, \quad 1 \le i \le \varepsilon_1, \quad 1 \le j \le \varepsilon_2, \quad 1 \le k \le \varepsilon_3.$$

A control can then be selected from this matrix in a look-up-table type manner.

Figure 3.6: Matrix form of the 4x4x4 Grid Controller in Figure 3.5

Given a relative state x and control matrix u, the control is defined as u(x; u) = u_{a,b,c}, for

$$a = \left\lfloor \frac{\varepsilon_1\,[\mathrm{mod}(\psi, 2\pi)]}{2\pi} \right\rfloor + 1, \qquad b = \left\lfloor \frac{\varepsilon_2\,[\mathrm{mod}(\psi_D, 2\pi)]}{2\pi} \right\rfloor + 1, \qquad c = \left\lfloor \frac{\varepsilon_3\, d}{d_{max}} \right\rfloor + 1,$$

where ψ, ψ_D, and d are the relative state variables at x and mod(a, b) is the remainder of a/b (the modulo operation). The matrix form of the illustrative example in Figure 3.5 is shown in Figure 3.6. By using this approach, a control strategy can be completely defined by a control matrix u. This parameterization will allow us to use an evolutionary algorithm to evolve u and optimize our agent's controller for time.
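The table look-up itself is only a few lines. The sketch below is an illustration (numpy, zero-based indices, so the "+1" terms above are dropped); it also clamps the d = d_max boundary case, which the floor expression alone would push one bin out of range.

```python
import numpy as np

def grid_control(psi, psi_D, d, u, d_max=10.0):
    """Look up the control u_{a,b,c} for relative state (psi, psi_D, d) in grid matrix u."""
    eps1, eps2, eps3 = u.shape
    a = int(np.floor(eps1 * (psi % (2 * np.pi)) / (2 * np.pi)))
    b = int(np.floor(eps2 * (psi_D % (2 * np.pi)) / (2 * np.pi)))
    c = min(int(np.floor(eps3 * d / d_max)), eps3 - 1)   # clamp the d == d_max edge case
    return u[a, b, c]

# Example: query a random 15 x 15 x 15 bang-bang grid controller
rng = np.random.default_rng(0)
u = rng.choice([-1, 1], size=(15, 15, 15))
print(grid_control(0.3, 5.9, 4.2, u))
```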

3.1.3 Evolutionary Architecture

To begin the EA, a population of N candidate agent controllers is created to serve as generation G_0 = {u_1, u_2, ..., u_N}, where u_i represents the control matrix as described in the previous section for controller i of the current generation. Each cell in u_i is initialized randomly with either a 1 or −1. After generating the initial population, the evolutionary algorithm will go through the following steps to create the next generation G_1:

1. Evaluate Population at Given Initial Conditions
2. Assign Fitness Based on Performance
3. Create Next Generation Through Mixing and Mutation

These steps will repeat, creating a new generation each time, until the algorithm completes a predefined number of generations, stopping at G_f.

Evaluation

Each candidate controller, u_i, is evaluated starting in p different initial conditions. We will define the set of p initial conditions (represented in relative coordinates) as X_0 := {x_{0,1}, x_{0,2}, ..., x_{0,p}}. The initial conditions used in this thesis are summarized in Figure 3.7. The desired state is held constant at the origin with a heading of zero, and the agent is moved around on concentric rings of varying radius from the desired position. These initial conditions can be represented as

$$X_0 := \{(\psi, \psi_D, d) \mid \psi \in \Psi,\ \psi_D \in \Psi_D,\ d \in D\}$$

for

$$\Psi = \left\{\psi = n\tfrac{2\pi}{5} \ \middle|\ n \in \mathbb{Z},\ 0 \le n \le 4\right\}, \qquad \Psi_D = \left\{\psi_D = p\tfrac{2\pi}{5} \ \middle|\ p \in \mathbb{Z},\ 0 \le p \le 4\right\}, \qquad D = \left\{d = 1 + m\tfrac{9}{4} \ \middle|\ m \in \mathbb{Z},\ 0 \le m \le 4\right\}.$$

This results in a total of 125 initial conditions. This collection of initial conditions is designed to thoroughly test the candidate controller and ensure that the evolved solutions are effective from many different initial conditions.

Figure 3.7: Initial Conditions
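Enumerating the training set is a triple loop over the three grids. A small sketch follows, assuming the ring spacing reconstructed above (d from 1 to 10 in five steps); the spacing is an assumption, not a value confirmed by the thesis text.

```python
import numpy as np

def training_initial_conditions():
    """The 5 x 5 x 5 = 125 relative-coordinate initial conditions X_0."""
    return [(n * 2 * np.pi / 5,      # psi
             p * 2 * np.pi / 5,      # psi_D
             1 + m * 9 / 4)          # d (assumed spacing: 1, 3.25, 5.5, 7.75, 10)
            for n in range(5) for p in range(5) for m in range(5)]

assert len(training_initial_conditions()) == 125
```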

Fitness

The evaluation of candidate control matrix u_i initiated at x_{0,k} provides a utility which we will refer to as a sub-fitness. This sub-fitness is defined as

$$f_s(u_i, x_{0,k}) := U(u(x; u_i); x_{0,k}). \qquad (3.5)$$

The cumulative fitness of an agent, f_c, represents its average sub-fitness over all the given initial conditions, where

$$f_c(u_i; X_0) := \frac{1}{p}\sum_{k=1}^{p} f_s(u_i; x_{0,k}). \qquad (3.6)$$

The goal of this optimization problem is to maximize the utility achieved by the agent's control matrix u in the set of initial conditions X_0:

$$\max_{u} f_c(u; X_0). \qquad (3.7)$$

Creating Next Generation

After the current generation G_i has been evaluated and every candidate controller has a cumulative fitness assigned to it, the next generation G_{i+1} is created. As defined in the beginning of this section, G_0 contains N candidate controllers. In order to maintain consistency, each generation is held to the same constant number of N candidate controllers. The composition of the new N players is shown in Figure 3.8 and is described in the following sub-sections.

Figure 3.8: Population Generation Method

Copies of Elite Controllers

The set of elite controllers is comprised of the candidate controllers with cumulative fitness in the top 5% of the generation. These candidate controllers are passed on to the next generation with no mutation. By segregating these elite controllers and passing them on between the generations unaltered, we can guarantee that the maximum cumulative fitness will never go down.

Fitness-Based Crossing of Two Controllers

The remaining 95% of the next generation is filled with the offspring from fitness-based crossing. The likelihood that a candidate controller is selected for crossing increases proportionally to its relative fitness with respect to the rest of the population. More specifically, for any given candidate controller u_n of cumulative fitness f_c(u_n; X_0), the probability of selection p is

$$p = \frac{f_c(u_n; X_0)}{\sum_{i=1}^{N} f_c(u_i; X_0)}.$$
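Fitness-proportional ("roulette wheel") selection can be implemented directly from the cumulative fitness values, assuming they are non-negative, as they are with the utility defined in (3.3). A one-line sketch using the standard library:

```python
import random

def select_parent(population, fitnesses):
    """Roulette-wheel selection: pick a controller with probability f_c / sum(f_c)."""
    return random.choices(population, weights=fitnesses, k=1)[0]
```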

Once two parent candidate controllers (P1 and P2) are selected, they are mixed using the following method. Two points (hereby referred to as split points), s_1 and s_2, are randomly selected in the ε_1 × ε_2 × ε_3 cube. The child is created by comparing the magnitude of the distance from each matrix index (i, j, and k) to these split points. Matrix indices closer to split point s_1 will be filled with P1's control value at that index, and indices closer to split point s_2 will be filled with P2's control value at that index:

$$C_{i,j,k} = \begin{cases} P1_{i,j,k} & \lVert [i, j, k] - s_1 \rVert \le \lVert [i, j, k] - s_2 \rVert \\ P2_{i,j,k} & \lVert [i, j, k] - s_1 \rVert > \lVert [i, j, k] - s_2 \rVert. \end{cases}$$

The resulting child C is then passed through a mutation function where each matrix index C_{i,j,k} has a p_m chance of mutating. The mutation chance p_m is defined as

$$p_m = \frac{1}{\varepsilon_1 \varepsilon_2 \varepsilon_3}.$$

Once selected for mutation, the mutation is performed by flipping the control value. So, for example, if C_{1,4,2} = 1 is selected for mutation, it would become C_{1,4,2} = −1.
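A compact numpy sketch of the split-point crossover and the cell-flip mutation is shown below; the function and parameter names are assumptions, not the thesis code.

```python
import numpy as np

rng = np.random.default_rng()

def cross_grids(P1, P2):
    """Child takes each cell from whichever parent's split point is nearer."""
    eps = np.array(P1.shape)
    s1, s2 = rng.uniform(0, eps, size=(2, 3))      # two random split points in the grid cube
    idx = np.stack(np.meshgrid(*[np.arange(e) for e in eps], indexing="ij"), axis=-1)
    d1 = np.linalg.norm(idx - s1, axis=-1)
    d2 = np.linalg.norm(idx - s2, axis=-1)
    return np.where(d1 <= d2, P1, P2)

def mutate_grid(C):
    """Flip each cell (-1 <-> +1) with probability p_m = 1 / (eps1 * eps2 * eps3)."""
    p_m = 1.0 / C.size
    flip = rng.random(C.shape) < p_m
    return np.where(flip, -C, C)
```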

3.1.4 Results and Analysis

The results in this section were obtained by evolving a population of candidate controllers over 5000 generations with the following simulation parameters.

Control Grid Resolution: ε_1 = ε_2 = ε_3 = 15
Buffer Zone Angle: β = γ = π/2
Buffer Zone Distance: d_B = 1
Terminal Value Fitness Weight: w_1 (maximum contribution 70)
Time Reward Fitness Weight: w_2 (maximum contribution 30)

In order to better evaluate these results, the analytically optimal Dubins Vehicle solution, solved by L. E. Dubins [1], will be used as a benchmark. This will allow us to see if the evolutionary algorithm is actually approximating the underlying optimal solution.

Dubins Vehicle Optimal Solution

The Dubins optimal solution can be geometrically described as connecting circles tangent to the agent to circles tangent to the desired location. The radius of these circles is defined by the constant turn rate ρ. There are two different ways of connecting these tangent circles: turn-straight-turn (TST) solutions and turn-turn-turn (TTT) solutions. An example of a TST solution is shown in Figure 3.9. The blue dotted circles are the circles tangent to the agent. The red dashed circles are the circles tangent to the desired state. The thick magenta line shows the trajectory taken from the agent starting location to the desired state. TST solutions are used for all initial conditions that have a distance to the desired state greater than or equal to 4r_min, where r_min is the radius of the circle made with the given constant turn rate ρ.

Figure 3.9: Turn-Straight-Turn (TST) Solution
Figure 3.10: Turn-Turn-Turn (TTT) Solution

An example of a TTT solution is shown in Figure 3.10. These solutions appear when the distance between the players is less than 4r_min. As can be seen, in these solutions the agent uses a connecting third circle that is tangent to both the agent's tangent circle and the desired state's tangent circle. Together, these two solution types completely define the Dubins Vehicle optimal solution and can be used to find the path of shortest travel time for a turn-constrained agent with constant velocity.

Analysis of Evolution

We will now examine the performance of the evolutionary algorithm. Figure 3.11 shows the maximum and average cumulative fitness in each generation over the 5000 generations of evolution.

Figure 3.11: Fitness Throughout the Evolution

The fitness follows a logarithmic shape, where the majority of the fitness increase happens from generation 1 to generation 500. Figure 3.12 shows a close-up of this region with the number of successful navigations overlaid. One can see that the fitness increases in sharp jumps that largely correspond with increases in the percent of successful navigations.

Figure 3.12: Fitness and Success from Gen 1 to 500

By generation 481, the evolutionary algorithm has developed a controller that is able to successfully navigate to the desired state from all of the tested initial conditions. Figure 3.13 shows the remainder of the evolution (generations 500 to 5000). We can see that the fitness still increases even though the rate of successful navigations stays at 100 percent. This continued fitness increase is a result of the time bonus in our utility function shown in Equation 3.3. This part of the evolution focused on optimizing the trajectories for time while maintaining the 100 percent success rate. The evolutionary algorithm was able to maintain the 100 percent success rate due to the use of elitism. Elitism ensures that the maximum fitness never goes down from one generation to the next. By weighing the terminal value score (w_1 = 7) much higher than the time bonus (w_2 = 3), we created a strong bias against reducing the number of successful navigations from one generation to the next. The marginal fitness improvements from the time bonus were never enough to outweigh the loss of a successful navigation. The only way for the evolutionary algorithm to increase the overall fitness was for it to find controllers that scored a higher time bonus without disturbing the 100 percent capture rate. With these high-level fitness trends in mind, we can now examine the performance of the evolved controllers at multiple points of interest throughout the evolution.

Figure 3.13: Fitness and Success from Gen 500 to 5000

These points are chosen to highlight some fitness leaps in the evolutionary process and have been marked on Figures 3.12 and 3.13. To begin, we will take a more detailed look at the evolution of successful navigations in generations 1 through 481. Figure 3.14 shows a summary of the successful captures at the marked generations. Similar to Figure 3.7, each small blue circle represents the (x, y) location of five initial conditions (one for each agent heading). If any of these five initial conditions end in a successful navigation, a line extending out from the point in the direction of the initial condition's heading is drawn. The arcs at the end of these lines are a visual aid to help one more easily see what percent of initial conditions end in success from a particular point. When all five initial conditions at a point end successfully, the arcs will form a full circle. Using this information we can see where the successful navigations are concentrated and how they evolved. Starting with Figure 3.14a, one can see that there are only a few initial conditions that end in success, and they are concentrated in the region above the desired location.

Figure 3.14: Summary of Successful Captures — (a) Generation 1, (b) Generation 16, (c) Generation 25, (d) Generation 46, (e) Generation 145, (f) Generation 481

Moving on to Figure 3.14b, we can see that many new successful initial conditions have been added. There are, however, some instances where an initial condition that was successful in generation 1 is no longer successful in generation 16. While this may seem like negative behavior, it is in fact a result of our fitness function. The evolutionary algorithm traded a few old successful navigations for a larger number of new successful navigations, which resulted in an overall increase in fitness. This trend continues through Figures 3.14c and 3.14d, with the evolutionary algorithm adding many new successful navigations at the cost of only a few previously successful initial conditions. By generation 145 there are only seven initial conditions that do not end successfully. At this point the evolution slows down, and it takes 336 more generations to develop a controller that is able to reach the desired state from all of the tested initial conditions. In order to get a better idea of how the trajectories evolved, we will now examine the performance of the controller with the highest cumulative fitness at several of our marked generations.

Figure 3.15: Generation 1 — (a) Initial Condition 1, (b) Initial Condition 2, (c) Initial Condition 3, (d) Initial Condition 4
Figure 3.16: Generation 25 — (a) Initial Condition 1, (b) Initial Condition 2, (c) Initial Condition 3, (d) Initial Condition 4
Figure 3.17: Generation 145 — (a) Initial Condition 1, (b) Initial Condition 2, (c) Initial Condition 3, (d) Initial Condition 4

The following set of figures shows the trajectories taken from a sample of four initial conditions. The thick blue line shows the path taken by the agent controlled with the controller of highest cumulative fitness in the generation. If it is solid, it indicates a successful navigation. If it is dashed, it indicates an unsuccessful navigation. The thin black dashed line shows the Dubins Vehicle optimal solution. The red drawing in the middle represents the desired state's location, heading, and buffer region behind it. To begin, Figure 3.15 shows the trajectory taken by an agent using the controller with the highest cumulative fitness in generation 1.

Figure 3.18: Generation 481 — (a) Initial Condition 1, (b) Initial Condition 2, (c) Initial Condition 3, (d) Initial Condition 4
Figure 3.19: Generation 1698 — (a) Initial Condition 1, (b) Initial Condition 2, (c) Initial Condition 3, (d) Initial Condition 4
Figure 3.20: Generation 5000 — (a) Initial Condition 1, (b) Initial Condition 2, (c) Initial Condition 3, (d) Initial Condition 4

As can be seen in Figures 3.15a, 3.15b, and 3.15d, most of the trajectories randomly wander around the space until they either exceed the maximum distance or time runs out. Figure 3.15c, however, shows a trajectory that actually seems to resemble the Dubins Vehicle optimal solution. Since this is generation 1, no evolution has yet taken place, and this is simply the result of a randomly generated control matrix. A few generations later, in generation 25, we can see that while three of our sample initial conditions are still unsuccessful, initial condition 3 (Figure 3.16c) has tightened its final turn and is now very close to the Dubins Vehicle optimal solution. Moving on to generation 145 (Figure 3.17), we can see that all four of our sample initial conditions now result in successful navigation. However, just because they end successfully does not mean that they do so in a time-effective manner. For example, initial conditions 1 and 4 (Figures 3.17a and 3.17d, respectively) show trajectories that take a very indirect path. We can see, however, in initial condition 4 that while it did end up taking a long path, it seems to have narrowly missed the desired state on its initial approach and as a result had to loop around. One small change in the control could allow this initial condition to capture on its initial approach, which would greatly improve the fitness. By generation 481, we can see that this small change did indeed happen, and initial condition 4 now captures on its initial approach (Figure 3.18d). In addition, we can see that the change did not disturb the performance of initial conditions 2 and 3. Looking back to Figure 3.14f, we can see that from this point onward all of the initial conditions end successfully, and thus the evolutionary algorithm now improves time performance without degrading the 100% capture rate. In generation 1698 (Figure 3.19), we can see that initial conditions 2, 3, and 4 closely approximate the Dubins Vehicle optimal solution and thus improved their time score. In fact, they overlap the optimal solution for the majority of their trajectory. However, the controller is still unable to match the turn-turn-turn optimal solution from initial condition 1.

The final set of results, in Figure 3.20, shows the performance of the best controller in generation 5000. This is the best controller evolved by the evolutionary algorithm. As can be seen in Figure 3.20a, the best controller has developed a turn-turn-turn solution and now very closely matches the Dubins Vehicle optimal solution from all four of our sample initial conditions. Overall, these results show that the evolutionary algorithm was able to gradually increase fitness through incremental improvements in the controller's performance. The use of elitism and the specific weights in our utility function allowed the algorithm to begin by focusing on successful navigations and then transition to optimizing the trajectories for time. While we only examined four sample initial conditions, the same analysis can be done for any of the 125 initial conditions and similar trends will be found. The trajectories begin by randomly moving around the space and gradually improve their performance until they almost completely overlap the Dubins Vehicle optimal solution.

Results of Evolutionary Algorithm

The end result of the evolutionary algorithm is the controller with the highest cumulative fitness in generation 5000 (hereby referred to as the best evolved controller). Figure 3.21 shows the trajectories taken by the best evolved controller starting from a sample of five initial conditions. The thick blue lines show the agent's trajectory and the thin dashed black lines represent the Dubins Vehicle optimal trajectory. We can see that the resulting best controller was able to approximate the Dubins Vehicle optimal solution very closely, developing both TST and TTT solutions. Figure 3.22 shows a close-up of initial condition A. The Dubins Vehicle optimal solution in this case is a TTT solution. The best evolved controller also implements a TTT solution that only slightly deviates from the optimal trajectory. Figure 3.23 shows a close-up of initial condition E.

Figure 3.21: Performance of Best Controller from Trained IC

As can be seen, the Dubins optimal trajectory here is a TST solution. Once again, the best controller is able to match this solution type with only slight variations in the trajectory. The fact that both TTT and TST solutions emerged from the evolutionary algorithm is important because it shows the ability of the algorithm to find the small performance differences between the two solutions and apply them in the correct situations. Additionally, as described in Section 3.1.2, going straight was not explicitly included in our control space. The agent could only turn left or right. So in order for the evolutionary algorithm to develop controllers that have these straight sections, it had to accurately place the boundaries between the left and right regions so that it could switch between them as it moved through the state space. As a result of rapidly switching between left and right, the overall trajectory appears to be straight.

Figure 3.22: Evolved TTT Solution vs Optimal TTT Solution
Figure 3.23: Evolved TST Solution vs Optimal TST Solution

Further Analysis

As stated in the introduction, one of the primary advantages of evolving a feedback controller is that it allows our evolved solution to be applied at any admissible initial condition. If the evolutionary algorithm truly approximates the underlying Dubins Vehicle optimal controller, it should be able to translate its performance into these new untrained initial conditions. In order to test this, we evolved a feedback controller from 216 initial conditions. We then took the best evolved controller and evaluated it at a set of 10,000 random untrained initial conditions. These random initial conditions were generated by uniformly selecting values for ψ and ψ_D in their admissible ranges (0 ≤ ψ < 2π and 0 ≤ ψ_D < 2π), and d in a smaller range to avoid starting too close to the maximum distance terminal condition (1 ≤ d ≤ d_max/2). From these tests, we found that the best evolved controller was able to successfully navigate to the desired state from 94.67% of the 10,000 random initial conditions, with an average time to termination of 1.67 times the Dubins Vehicle optimal solution time. Figure 3.24 shows some of the random initial conditions that ended successfully.
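A minimal sketch of how these untrained initial conditions might be drawn is shown below; it assumes the initial condition is described by the headings ψ and ψ_D together with the distance d, with d_max denoting the maximum-distance terminal boundary, and the function name and data layout are illustrative rather than taken from the thesis code.

```python
import numpy as np

def sample_untrained_ics(n, d_max, rng=None):
    """Sample n random initial conditions (psi, psi_D, d): headings uniform
    on [0, 2*pi) and distance kept away from the d_max terminal boundary."""
    rng = rng or np.random.default_rng()
    psi   = rng.uniform(0.0, 2.0 * np.pi, size=n)   # agent heading
    psi_d = rng.uniform(0.0, 2.0 * np.pi, size=n)   # desired-state heading
    d     = rng.uniform(1.0, d_max / 2.0, size=n)   # 1 <= d <= d_max / 2
    return np.column_stack([psi, psi_d, d])

# e.g. ics = sample_untrained_ics(10_000, d_max=20.0)  # d_max value is a placeholder
```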

Figure 3.24: Performance from Untrained IC

As can be seen, even in these untrained initial conditions, the best controller was able to closely approximate the Dubins Vehicle optimal solution. By evolving a parameterized feedback controller with a set of only 216 initial conditions, we were able to develop a solution that can be applied to the entire continuum of initial conditions and approximate the true optimal solution 94.67% of the time. The controller parameterization used in the evolutionary algorithm was a 15x15x15 grid, whereas the true Dubins Vehicle optimal solution exists in continuous space. Figure 3.25 shows an illustrative example of a two-dimensional control space. Figure 3.25a shows the true continuous boundary between turn left and turn right. When we discretize it as shown in Figure 3.25b, we can see that the new boundary is quite different from the original one (shown as the dotted line). The grid restricts where the controller can change its control and thus will cause deviations from the true optimal solution. To further illustrate this, Figures 3.26a-c show a slice of the three-dimensional control space at distance d = 7 for three different controllers: the true Dubins Vehicle optimal controller, the discretized Dubins Vehicle optimal controller, and the best evolved controller.

Figure 3.25: Example of Grid Parameterization Error — (a) True Solution, (b) Grid Approximation of Solution

Starting with the true Dubins Vehicle optimal solution (Figure 3.26a), we can see that the boundaries between turn left (yellow) and turn right (blue) are quite complicated, with curves and sharp corners. We can discretize this continuous controller and end up with the boundaries shown in Figure 3.26b. We can immediately see a degradation in the ability of the controller to accurately represent these boundaries. Moving on to the best evolved controller (Figure 3.26c), we can see that the Dubins Vehicle optimal boundaries are very poorly represented. There is some resemblance between the two controllers, but in general the control seems to be much more chaotically distributed. Looking at these controllers, it would seem that the discretized Dubins controller is the best possible 15x15x15 approximation of the true underlying solution. Furthermore, it would seem that the evolved solution is overfit to our tested initial conditions and doesn't truly represent the underlying solution. To test this, we evaluated all three of these controllers from a sample untrained initial condition. Figures 3.26d-f show the corresponding trajectories. From these trajectories, one can see that the discretized Dubins controller is actually unable to reach the desired state and performs much worse than the evolved controller.
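For reference, one straightforward way to build such a discretized controller is to sample the continuous control law at the center of every cell of the 15x15x15 grid, as sketched below; the continuous_control callable and the state ranges are placeholders, since the excerpt does not specify how the discretized Dubins controller was actually constructed.

```python
import numpy as np

def discretize_controller(continuous_control, bins=(15, 15, 15),
                          psi_range=(0.0, 2.0 * np.pi),
                          psi_d_range=(0.0, 2.0 * np.pi),
                          d_range=(0.0, 20.0)):   # d_range upper bound is a placeholder
    """Sample a continuous control law u(psi, psi_D, d) at every grid-cell
    center and store the returned turn command (+1 or -1) in a control matrix."""
    def centers(lo, hi, n):
        edges = np.linspace(lo, hi, n + 1)
        return 0.5 * (edges[:-1] + edges[1:])

    grid = np.empty(bins)
    for i, psi in enumerate(centers(*psi_range, bins[0])):
        for j, psi_d in enumerate(centers(*psi_d_range, bins[1])):
            for k, d in enumerate(centers(*d_range, bins[2])):
                grid[i, j, k] = continuous_control(psi, psi_d, d)
    return grid
```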

Figure 3.26: Evolved vs Discretized Dubins — (a) Dubins Vehicle Optimal Solution, (b) Discretized Dubins Vehicle Optimal Solution, (c) Evolved Controller, (d) Dubins Vehicle Optimal Path, (e) Discretized Dubins Vehicle Path, (f) Evolved Path

In fact, when tested from an array of 10,000 initial conditions (similarly to our evolved controller), this discretized controller was only able to capture 52.3% of the time, with an average time to termination of 5.31 times the Dubins Vehicle optimal solution time. Ideally, the best possible controller would perfectly represent the true optimal boundaries and thus perfectly represent the Dubins Vehicle optimal solution. However, since our grid parameterization prevented the controllers from accurately representing the boundaries, the evolutionary algorithm developed a new solution that closely resembles the performance of the Dubins Vehicle optimal solution without accurately representing the correct boundaries. This is an important result as it shows the flexibility of the evolutionary algorithm. Given the limitations of the grid parameterization used, we can see that the evolved controller actually performed quite well. Even with a simple low-resolution grid controller, the evolutionary algorithm was able to develop a solution that approximates the
