PARALLELIZATION OF THE NELDER-MEAD SIMPLEX ALGORITHM

Scott Wu
Montgomery Blair High School, Silver Spring, Maryland

Paul Kienzle
Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, Maryland

ABSTRACT

The Nelder-Mead Simplex algorithm, proposed by John Nelder and Roger Mead, is an iterative, multidimensional, downhill search algorithm that uses a simplex to search over a function. It is frequently used in numerical optimization because of its simplicity and effectiveness, but it has room for improvement. As a mainly serial algorithm, the Nelder-Mead Simplex algorithm fails to take full advantage of the parallel processing capabilities available in modern computers. To improve its performance, a parallel version of the algorithm, described in Lee and Wiswall (2007), was implemented, allowing it to effectively utilize parallel processing. Additionally, another parallel variation of the algorithm was written by extending the number of simplex vertices. Each version of the algorithm was tested with three optimization problems: minimizing the Quadratic function, minimizing the Rosenbrock function, and fitting a polynomial. The results of these tests indicated linear speedup in the parallel algorithm when it was run in a parallel environment. Even without parallel processing, however, the algorithm still showed increased robustness, converging successfully more often at the cost of a slight increase in work. The parallel Nelder-Mead Simplex algorithm can therefore be used for faster and more efficient optimization, either by utilizing parallel processing or by exploiting its increased robustness.

INTRODUCTION

Optimization is an important technique in everyday life. Businesses want to maximize the difference between profits and production costs. Engineers want to optimize the performance of their products. Researchers want to fit models to data as closely as possible. There exists a variety of optimization algorithms with different purposes; some can find multiple solutions to a problem, others find solutions extremely quickly. For more complex problems, however, optimization algorithms require longer run times to find a solution, or may fail to find one at all. The purpose of this research is to increase the performance of one common optimization algorithm, the Nelder-Mead Simplex algorithm, through parallelization, and to compare its performance to existing algorithms.

The Nelder-Mead Simplex algorithm, developed by John Nelder and Roger Mead in 1965, is a quick and simple multidimensional optimization algorithm. Multidimensionality is important for fitting models, which may contain dozens of unknown coefficients. Given the general form of a model and a set of calculated test points, the Nelder-Mead Simplex algorithm can find specific coefficients by minimizing the difference between the estimated model and the points.

The Nelder-Mead Simplex algorithm uses a simplex to perform a downhill search on a function. A simplex in n dimensions is a figure with n + 1 vertices such that all vertices cannot be contained within any lower dimension. To traverse the function, the vertices are sorted by value, equal to the function evaluation at each point. The worst point, the point with the highest value, is transformed about the centroid of the remaining points, producing a new point. Transformations, as shown in Figure 1.1, include the reflection, expansion, outer contraction and inner contraction points. This constitutes one iteration of the algorithm, which is repeated until the simplex converges onto a single point.
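As an illustration, the following is a minimal Python sketch of one such iteration, assuming the standard transformation coefficients (reflection 1, expansion 2, contraction 0.5). For brevity it applies only the inner contraction and shrink steps, whereas the full algorithm also tests an outer contraction point; it is not the paper's implementation.

```python
import numpy as np

def nelder_mead_step(simplex, values, f):
    """One iteration: replace the worst vertex by reflecting it about the
    centroid of the remaining vertices, expanding or contracting as needed."""
    order = np.argsort(values)                 # sort vertices from best to worst
    simplex, values = simplex[order], values[order]
    worst = simplex[-1]
    centroid = simplex[:-1].mean(axis=0)       # centroid excludes the worst vertex

    reflection = centroid + (centroid - worst)
    fr = f(reflection)
    if fr < values[0]:                         # better than the best: try expanding
        expansion = centroid + 2.0 * (centroid - worst)
        fe = f(expansion)
        new, fn = (expansion, fe) if fe < fr else (reflection, fr)
    elif fr < values[-2]:                      # better than second worst: accept
        new, fn = reflection, fr
    else:                                      # inner contraction toward the centroid
        contraction = centroid + 0.5 * (worst - centroid)
        fc = f(contraction)
        if fc < values[-1]:
            new, fn = contraction, fc
        else:                                  # shrink all vertices toward the best
            simplex = simplex[0] + 0.5 * (simplex - simplex[0])
            return simplex, np.array([f(p) for p in simplex])
    simplex[-1], values[-1] = new, fn
    return simplex, values
```

Repeating this step until the spread of vertex values, or the simplex diameter, falls below a tolerance yields the converged point.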

Figure 1.1 Each point on the line through the worst vertex and the centroid of the remaining vertices is a possible replacement for the worst vertex. The centroid itself is not a transformation point because the simplex would otherwise lose a dimension of freedom.

Parallel processing environments are now common in modern technology, allowing multiple computations to be performed simultaneously. The original algorithm, however, is mainly serial. Parallelizing the algorithm can improve its overall speed linearly with the number of parallel processors available. The Nelder-Mead algorithm can be parallelized by transforming the k worst vertices rather than just the single worst vertex. Each of these vertices can be transformed simultaneously, one on each parallel process, so the result returns in the same time it takes to compute a single vertex.

Three versions of the algorithm were used in this research: the original Serial Simplex algorithm, the Parallel Simplex algorithm, and the Extended Simplex algorithm. The algorithms were also run in three different environments: a single-core computer, a multi-core computer and a computer cluster. Each algorithm was tested with various problems, ranging from simple to complex. The results indicated that both the parallel and extended versions of the algorithm showed a linear speedup in the number of iterations needed, up to a certain point. Wall clock time followed this linear speedup until all the available parallel processes were used. The results of this research can be used to solve optimization problems more quickly and efficiently.

MATERIALS AND METHODS

Hardware:

The hardware used in this research consisted of 27 computers, one gigabit Ethernet switch, and a set of accessories (one power supply and one Ethernet cable) for each computer. One computer, called the head node, was a regular computer with a hard disk drive. All other computers, called worker nodes, had dual-core processors, 2 GB of RAM and no hard disk drive. These "diskless" computers were placed on a computer rack for centralized cooling and organization. Connecting all the computers together formed the computer cluster used to run programs in a parallel environment.

Software:

The software used in this research was divided into the tools needed to set up the computer cluster and those needed to implement the Nelder-Mead Simplex algorithm. Ubuntu was the primary operating system used in the computer cluster. Additional packages were installed on the head node to configure the cluster. A Dynamic Host Configuration Protocol (DHCP) server allowed the head node to act as a router, creating an internal network solely between it and the connected worker nodes. A Pre-boot Execution Environment (PXE Boot) server provided instructions to the worker nodes on startup through the network, allowing them to download and boot a custom-built Ubuntu kernel through a Trivial File Transfer Protocol (TFTP) server. A Network File System (NFS) server allowed worker nodes to access a file system on the head node mounted over the network. Ganglia allowed the computers in the cluster to be monitored through a web interface.

The Nelder-Mead Simplex algorithms were programmed in Python, using the standard Python library as well as additional libraries such as NumPy, SciPy, MatPlotLib and PyLab. Open Message Passing Interface (OpenMPI) and its Python wrapper, MPI4py, were used to communicate between worker nodes within the program. Bayesian Uncertainty Modeling for Parametric Systems (BUMPS) is a fitting engine written in Python; it contains a modified version of the Serial Nelder-Mead Simplex algorithm found in the SciPy library, as well as Differential Evolution, another optimization algorithm.

Setting up the Cluster:

The computer cluster provided the parallel environment needed to run parallel programs. To set up the cluster, all computers were attached to the gigabit switch using Ethernet cables. The switch was attached to the head node, which ran a DHCP server to configure and manage the cluster network. A custom Ubuntu kernel was built to minimize size while including all necessary components, such as network drivers, a temporary file system stored in memory, and an NFS mount from the head node. Because the worker nodes were diskless, this kernel needed to be sent on startup in order for the worker nodes to boot.

In order to run OpenMPI on the cluster, all worker nodes required a password-less remote shell connection. Adding a generated SSH key to an account on the NFS allowed remote connections to log in without a password. This enabled OpenMPI to log in to each worker node and run programs without user input. Because the worker nodes were located on an isolated network, security was not a concern in enabling this feature.
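As an illustration of the MPI4py communication pattern (a sketch, not the BUMPS code itself), a batch of trial points might be scattered from the head node, evaluated on the workers, and gathered back; the objective function here is a stand-in:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def f(x):
    return float(np.sum(x ** 2))                # stand-in objective (Quadratic)

if rank == 0:
    points = np.random.rand(4 * size, 10)       # a batch of trial points
    chunks = np.array_split(points, size)       # one chunk per process
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)            # distribute the chunks
local_values = [f(x) for x in chunk]            # evaluate this rank's share
gathered = comm.gather(local_values, root=0)    # collect results on the root

if rank == 0:
    values = [v for part in gathered for v in part]
    print(f"collected {len(values)} evaluations")
```

Launched with, for example, `mpirun -np 8 python evaluate.py`, each worker evaluates its share of the batch simultaneously.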

Programming the Algorithm:

Using the implementation in BUMPS and the original paper, Nelder and Mead (1965), as references, a standalone version of the original Nelder-Mead Simplex algorithm was created in Python. The algorithm took as parameters the target function to optimize and various starting and stopping criteria, and returned a set of results containing the solution and performance statistics.

Modifying the Algorithm:

Multiple branches of this program were derived from the original algorithm. The main modification was the parallel implementation of the algorithm described in Lee and Wiswall (2007). The Parallel Simplex algorithm accepted an additional argument specifying the degree of parallelism. Where the original algorithm transforms only the worst vertex in the simplex, the degree of parallelism specifies how many vertices to transform: with a degree of 2, for example, each iteration transforms the two worst vertices, while a degree of 1 is the same as the Serial Simplex algorithm. The transformations of these worst vertices can be parallelized since they are independent of each other.

The Extended Simplex algorithm is an extension of the Parallel Simplex algorithm. Instead of using the traditional definition of a simplex, the Extended Simplex algorithm uses additional vertices, ranging anywhere from 1 to the number of dimensions. For example, in two dimensions an extended simplex could be a square instead of a triangle, or, in three dimensions, a cube instead of a tetrahedron. With one additional vertex, the Extended Simplex algorithm is equivalent to the Parallel Simplex algorithm.

Another modification to the Parallel Simplex algorithm utilized a mapper to perform function evaluations in parallel, using either Python's multiprocessing library or MPI. Given a collection of points to transform, the mapper assigns each point to a processor or worker node. If there are more points than processors, the mapper waits until one processor finishes and then sends it the next point to transform. A sketch of this scheme follows.
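The sketch below illustrates the parallel update with a mapper passed in as a callable; for brevity it accepts only improving reflections, whereas the full Lee and Wiswall (2007) scheme also tries expansion and contraction for each of the k vertices:

```python
import numpy as np
from multiprocessing import Pool

def f(x):
    return float(np.sum(x ** 2))                # stand-in objective (Quadratic)

def parallel_simplex_step(simplex, values, k, mapper=map):
    """Reflect the k worst vertices about the centroid of the rest. `mapper`
    is the builtin map (serial) or e.g. multiprocessing.Pool.map (parallel)."""
    order = np.argsort(values)
    simplex, values = simplex[order], values[order]
    centroid = simplex[:-k].mean(axis=0)        # centroid of the retained vertices

    trials = [centroid + (centroid - simplex[-j]) for j in range(1, k + 1)]
    trial_values = list(mapper(f, trials))      # the k evaluations run independently

    for j in range(1, k + 1):
        if trial_values[j - 1] < values[-j]:    # keep only improving moves
            simplex[-j] = trials[j - 1]
            values[-j] = trial_values[j - 1]
    return simplex, values

if __name__ == "__main__":
    simplex = np.random.rand(11, 10)            # 11 vertices in 10 dimensions
    values = np.array([f(p) for p in simplex])
    with Pool(4) as pool:
        for _ in range(200):
            simplex, values = parallel_simplex_step(simplex, values, k=3,
                                                    mapper=pool.map)
    print(values.min())
```

Because all k reflections use the same centroid, they can be dispatched to the pool in a single map call, which is exactly what makes the scheme parallelizable.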

Test Problems:

Three test problems were used to measure the performance of the algorithms. The Quadratic function and the Rosenbrock function are unimodal, multidimensional functions that are optimized by finding their minimums. The Polynomial Fitting problem attempts to fit a polynomial of a given degree to a generated set of points; the function takes polynomial coefficients and returns the sum of the squared residuals at each of the points.

The Quadratic function, defined by the equation in Figure 2.1, is a simple bowl whose minimum is at the origin. Because all sides point downhill, convergence is both quick and trivial.

f(x_1, x_2, \dots, x_N) = \sum_{i=1}^{N} x_i^2

Figure 2.1 The Quadratic function in three-dimensional space and its equation for a multidimensional point.
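In NumPy the Quadratic objective is a one-liner; as a sanity check (not the paper's test harness), SciPy's serial Nelder-Mead recovers the origin:

```python
import numpy as np
from scipy.optimize import minimize

def quadratic(x):
    """Sum of squares; the global minimum is 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

x0 = np.random.uniform(-1, 1, size=10)          # random start in 10 dimensions
res = minimize(quadratic, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-9})
print(res.fun, res.nit, res.nfev)               # value, iterations, evaluations
```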

The Rosenbrock function, defined by the equation in Figure 2.2, is a more difficult function to optimize due to its hill and curved valley. Its minimum lies at the point (1, 1, \dots, 1).

f(x_1, x_2, \dots, x_N) = \sum_{i=1}^{N-1} \left[ (1 - x_i)^2 + 100\,(x_{i+1} - x_i^2)^2 \right]

Figure 2.2 The Rosenbrock function in three-dimensional space and its equation for a multidimensional point.

The Polynomial Fitting problem is an example of fitting data by minimizing the deviation of given points from a predicted function. First, a degree N - 1 polynomial is created with random coefficients. Next, N points are generated by evaluating the polynomial at different locations. The optimization algorithm is able to access only these given points and not the original polynomial. Because N points define a unique polynomial of degree N - 1, there exists a unique set of N polynomial coefficients that fits the given set of points. The Polynomial Fitting function takes N coefficients as inputs and creates the predicted polynomial. The residuals of the given points from the predicted polynomial are calculated, and the sum of the squared residuals is returned as the function value. When the predicted polynomial is equal to the original polynomial, all points fit the predicted polynomial, and the sum of the squared residuals equals zero.
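The two remaining objectives might be written as follows; this sketch uses the conventional Rosenbrock coefficient of 100, and the polynomial objective matches the equation given in Figure 2.3 below (NumPy's polyval expects coefficients ordered from the highest degree, hence the reversal):

```python
import numpy as np

def rosenbrock(x):
    """Curved-valley test function; the minimum is 0 at (1, ..., 1)."""
    x = np.asarray(x)
    return float(np.sum((1 - x[:-1]) ** 2 + 100.0 * (x[1:] - x[:-1] ** 2) ** 2))

def make_polynomial_fit(n, seed=0):
    """Create a degree n-1 polynomial with random coefficients, sample n
    points from it, and return the sum-of-squared-residuals objective."""
    rng = np.random.default_rng(seed)
    true_coeffs = rng.uniform(-1, 1, size=n)     # c_1 ... c_N, lowest degree first
    px = np.linspace(-1, 1, n)
    py = np.polyval(true_coeffs[::-1], px)       # sampled points the solver sees

    def objective(c):
        residuals = py - np.polyval(np.asarray(c)[::-1], px)
        return float(np.sum(residuals ** 2))
    return objective

fit = make_polynomial_fit(10)
print(rosenbrock(np.ones(25)), fit(np.zeros(10)))   # 0.0 and a nonzero residual
```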

Given points (p_{x_1}, p_{y_1}), (p_{x_2}, p_{y_2}), \dots, (p_{x_K}, p_{y_K}) to fit with a degree N - 1 polynomial:

f(c_1, c_2, \dots, c_N) = \sum_{i=1}^{K} \left[ p_{y_i} - g(p_{x_i}) \right]^2, \quad \text{where } g(x) = c_1 + c_2 x + c_3 x^2 + \dots + c_N x^{N-1}

Figure 2.3 Fitting three points with a cubic function. Residuals are shown with dotted lines. The sum of the squared residuals gives the function evaluation.

Performing Tests:

The performance of each algorithm was measured in terms of function evaluations, function iterations and rate of failure. Function evaluations measure the total amount of work an algorithm does. Ideally, the execution time of the algorithm itself is negligible compared to that of the function evaluations. In a serial environment, this statistic measures the amount of time needed to run the algorithm; in a parallel environment, it measures only the total work across all processors, since many evaluations are done at the same time. Function iterations measure the performance of the algorithm in a parallel environment. Assuming there is a sufficient number of processors, each iteration takes the same amount of time, since no more than two serial evaluations are performed on any given processor.

The rate of failure measures the robustness of an algorithm by counting the number of times the algorithm fails to optimize. Failure can result from converging to a false minimum, exceeding the maximum number of iterations, or failing to converge at all. As the difficulty of the problem increases, it is important that the algorithm performs not only quickly but also consistently and successfully.

These three statistics were collected by running performance tests using every combination of problem and algorithm. The Serial Simplex algorithm and Differential Evolution ran 100 trials with random initial conditions for each problem. The Parallel Simplex algorithm ran 100 trials for each problem at every degree of parallelism, with degrees of parallelism ranging from 1 to one less than the total number of vertices. Corresponding runs at different degrees of parallelism began with the same seeded random initial conditions. The Extended Simplex algorithm ran 10 trials for every degree of parallelism and every number of extra vertices, with extra vertices ranging from 1 to double the number of parameters. The Quadratic function was tested with 100 dimensions, the Rosenbrock function with 25 dimensions, and the Polynomial Fitting problem with 10 dimensions. A sketch of this protocol appears below.
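The following sketch of the trial protocol uses SciPy's serial Nelder-Mead as a stand-in for the solvers under test, with a tolerance check to classify failures; the names and thresholds are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def run_trials(problem, dim, n_trials=100, max_iter=50000, tol=1e-6):
    """Return mean iterations, mean evaluations and failure rate over seeded
    random starts; reusing seeds keeps runs comparable across configurations."""
    iters, evals, fails = [], [], 0
    for trial in range(n_trials):
        rng = np.random.default_rng(trial)       # trial index doubles as the seed
        x0 = rng.uniform(-1, 1, size=dim)
        res = minimize(problem, x0, method="Nelder-Mead",
                       options={"maxiter": max_iter})
        if res.fun > tol:                        # false minimum or non-convergence
            fails += 1
        iters.append(res.nit)
        evals.append(res.nfev)
    return np.mean(iters), np.mean(evals), fails / n_trials

quadratic = lambda x: float(np.sum(np.asarray(x) ** 2))
print(run_trials(quadratic, dim=10, n_trials=10))
```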

RESULTS

Performance statistics were collected by capturing data on each of the algorithms. The results were categorized into three groups for comparison and analysis, with some results placed in more than one group for different comparisons.

The first group of results compared the performance of the Parallel Simplex algorithm to that of the Serial Simplex algorithm on each problem. Figure 3.1 shows the average number of iterations versus degree of parallelism for the Quadratic, Rosenbrock and Polynomial Fitting problems. Figure 3.2 shows the average number of function evaluations versus degree of parallelism for the same problems. Figure 3.3 shows the rate of failure versus degree of parallelism for the Rosenbrock function. Degrees of parallelism are labeled as a percentage of the total parameters, since the problems differed in their numbers of vertices. At a degree of parallelism of 1, the Parallel Simplex algorithm transforms a single vertex, making it equivalent to the Serial Simplex algorithm.

Figure 3.1 At 1 degree of parallelism, the Parallel Simplex is equivalent to the Serial Simplex. Linearity on the log-log plot indicates linear speedup in a parallel environment. All three problems demonstrate linear speedup up to about a 30% degree of parallelism.

Figure 3.2 Near-horizontal lines indicate little change in the amount of work done. The number of evaluations is nearly constant, or linearly decreasing in the case of the polynomial fit, until about a 30% degree of parallelism.

Figure 3.3 The rate of failure demonstrates the difficulty of the problems. The Polynomial Fit failed often, while the Quadratic function never failed to converge. The rate of failure is also minimal around a 30% degree of parallelism.

The second group of results compared the Parallel Simplex algorithm to the Serial Simplex algorithm, the Extended Simplex algorithm and Differential Evolution. Tables 3.4, 3.5 and 3.6 present statistics for the four algorithms on the Quadratic, Rosenbrock and Polynomial Fitting problems, respectively. For the Parallel Simplex algorithm, the degree of parallelism used in the comparison is the one that produced the best result.

[Table 3.4: average iterations, average function evaluations and failure rate on the Quadratic function for the Serial Simplex; the Parallel Simplex at 29% and at 95% degrees of parallelism; the Extended Simplex at a 55% degree of parallelism with 100 extra vertices; and Differential Evolution.]

Table 3.4 The Quadratic function showed no instance of failure throughout the tests. The Parallel and Extended Simplex algorithms list their best performance. Differential Evolution was run with default parameters, resulting in non-ideal performance; it may, however, be compared to a non-ideal Parallel Simplex test.

[Table 3.5: average iterations, average function evaluations and failure rate on the Rosenbrock function for the Serial Simplex; the Parallel Simplex at 32% and at 60% degrees of parallelism; the Extended Simplex at a 30% degree of parallelism with 25 extra vertices; and Differential Evolution.]

Table 3.5 The Rosenbrock function proves to be a more difficult problem, requiring many more iterations and evaluations and showing higher rates of failure at the best performance. * Differential Evolution, a global optimizer, occasionally converged to another minimum at a saddle point, which accounts for its high failure rate.

[Table 3.6: average iterations, average function evaluations and failure rate on the Polynomial Fitting problem for the Serial Simplex; the Parallel Simplex at 40% and at 80% degrees of parallelism; the Extended Simplex at a 50% degree of parallelism with 10 extra vertices; and Differential Evolution.]

Table 3.6 Polynomial fitting is the most difficult of the three problems, with the highest rates of failure. The Parallel Simplex shows a large decrease in evaluations, as opposed to the small increase seen on the other problems. The Extended Simplex shows even further decreases in all three categories. * Due to unmodified fitting parameters, Differential Evolution failed to converge at all within the iteration limit.

The third group of results compares the performance of the Parallel Simplex algorithm to the Extended Simplex algorithm, with double the vertices, on various problems. Figures 3.7 and 3.8 compare the two on the Quadratic function in terms of iterations and evaluations, respectively. Figure 3.9 focuses specifically on the Extended Simplex algorithm. The color-coded graphs plot the number of iterations (a) or evaluations (b) against the number of extra simplex vertices and the degree of parallelism. Since the degree of parallelism cannot exceed the total number of vertices, unobtainable points are colored white.

Figure 3.7 The Extended Simplex algorithm was run with twice the number of vertices for the Quadratic function. The linearity of the Extended Simplex algorithm extends further than that of the Parallel Simplex algorithm: its peak performance is at a 55% degree of parallelism, as opposed to 29% for the Parallel Simplex algorithm.

Figure 3.8 The horizontal line for the Extended Simplex also extends further than that of the Parallel Simplex in terms of evaluations. Although the Extended Simplex starts with more evaluations than the Parallel Simplex, the Parallel Simplex surpasses the Extended Simplex after reaching the peak.

Figure 3.9 The Extended Simplex algorithm was tested by varying both the degree of parallelism and the number of extra vertices. White points indicate that the degree of parallelism exceeded the total number of vertices. The best performance in iterations occurs at around 100 extra vertices and a degree of parallelism of around 100; the best performance in evaluations occurs near the opposite end of the graph, at around 5 extra vertices and a degree of parallelism of around 5.

DISCUSSION AND CONCLUSION

Measures of Performance:

Iterations and function evaluations are measures of performance under different situations. In an environment with sufficient parallel processes available, iterations are a direct measure of performance, since the average computation time is ideally the same for each iteration. On a log-log scale, a linear plot corresponds to linear speedup, where the speed of the algorithm is directly proportional to the degree of parallelism.

Function evaluations indicate the total amount of work being done by all processors. At each iteration, each parallel process requires one or two function evaluations. In a parallel environment, this measure is irrelevant to performance since all evaluations are performed at the same time. In a serial environment, however, function evaluations are the direct measure of performance.

The rate of failure determines the robustness of the algorithm's optimization capabilities. An algorithm with a higher rate of failure requires more tests on a problem before a solution may be determined; a high rate of failure may even counteract the increased performance of individual runs. Similarly, a low rate of failure further increases the effective performance of an algorithm.

Serial Simplex Algorithm vs Parallel Simplex Algorithm:

The Parallel Simplex algorithm reaches peak performance at approximately a 30% degree of parallelism for all optimization problems. Up to this peak, parallel processing effectively allows for linear speedup, as shown by the linear decrease in iterations (Figure 3.1). The number of evaluations increases slightly up to the peak (Figure 3.2). After this peak, iterations and evaluations both increase significantly. Additionally, the Parallel Simplex algorithm shows a decrease in the rate of failure.

On the Rosenbrock function, the algorithm would occasionally converge to a false minimum, and on the Polynomial Fitting problem, the algorithm failed to fit within the iteration limit. As the degree of parallelism increased to the peak, the rate of failure decreased to almost no failures on both problems.

Compared to the Serial Simplex algorithm at 1 degree of parallelism, the peak of the Parallel Simplex algorithm demonstrates linear speedup at the cost of a slight increase in evaluations. With a sufficient number of parallel processes, the extra evaluations are negligible. Even without parallel processing, however, the peak still exhibits increased robustness at the cost of the slight increase in evaluations. Beyond the peak, there is a clear decrease in overall performance.

Extended Simplex Algorithm:

The Extended Simplex algorithm expands upon the Parallel Simplex algorithm by adding additional vertices, thus allowing the degree of parallelism to increase beyond the number of parameters. On the Quadratic function and Polynomial Fitting problems, its peak extends up to a 55% degree of parallelism, showing an even further decrease in iterations compared to the Parallel Simplex algorithm (Tables 3.4, 3.6). Evaluations and rate of failure follow a similar pattern: the number of evaluations increases slightly on the Quadratic function and decreases on the Polynomial Fitting problem as the degree of parallelism approaches the peak, while the rate of failure on the Polynomial Fitting problem becomes 0%, compared to 3% with the Parallel Simplex algorithm.

On the Rosenbrock function, however, the peak still lies at approximately a 30% degree of parallelism (Table 3.5). The number of evaluations again increases slightly up to the peak, but the rate of failure is greater than that of the Parallel Simplex algorithm. On these failures, the Extended Simplex algorithm converged to a false minimum, much like Differential Evolution.

When calculating the centroid of the extended simplex, the extra vertices contribute more to the direction of the transformation, which points toward either the true minimum or a false minimum. Compared to the Parallel Simplex, the Extended Simplex is less likely to change the direction of its search and searches that direction more aggressively. While this increases the rate at which the algorithm converges, it also increases the chance of converging to a false minimum.

The Extended Simplex algorithm shows promising results, but its behavior depends on the given problem. Given a sufficient number of parallel processors, the Extended Simplex algorithm may increase performance over the Parallel Simplex and Serial Simplex algorithms, but it may also suffer decreased performance, as seen on the Rosenbrock function. Unlike the Parallel Simplex algorithm, an Extended Simplex at 1 degree of parallelism does not necessarily increase robustness, yet it comes with the cost of a much greater increase in evaluations, making it impractical for serial environments.

Parallel Simplex Algorithm vs Differential Evolution:

Differential Evolution is a multimodal optimization algorithm that works by transforming a population of points. Unlike the Nelder-Mead Simplex algorithm, points are transformed per dimension, based on a series of random selections. Because iterations in Differential Evolution are calculated differently, they cannot be compared to Nelder-Mead Simplex iterations (Tables 3.4, 3.5). Additionally, Differential Evolution contains parameters for fine-tuning the optimization, such as the population size, the expansion factor and the crossover constant described in Storn and Price (1997). Since these factors were left at the default values specified in the BUMPS software, evaluations cannot be compared either, as these values are not optimized for best performance.
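For reference, the classic DE/rand/1/bin update from Storn and Price (1997) perturbs each candidate x_i per dimension j, where F is the expansion (scale) factor, CR is the crossover constant mentioned above, and r_1, r_2, r_3 are distinct random population indices:

```latex
v_i = x_{r_1} + F\,(x_{r_2} - x_{r_3}), \qquad
u_{i,j} =
\begin{cases}
  v_{i,j} & \text{if } \operatorname{rand}_j \le CR \text{ or } j = j_{\text{rand}}, \\
  x_{i,j} & \text{otherwise.}
\end{cases}
```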

Performance results for the Parallel Simplex algorithm under non-optimal conditions were also included to show that the Parallel Simplex can likewise exhibit poor performance under flawed conditions.

Whereas the Nelder-Mead Simplex algorithm is a local optimizer, Differential Evolution is a global optimizer and will converge to any minimum. On the Rosenbrock function, Differential Evolution did not fail to converge, as shown in Table 3.5, but instead converged to another minimum of the Rosenbrock function. Upon further investigation, this point was determined to be approximately f(-1, 1, \dots, 1) = 4. Differential Evolution requires further adjustment before it can be properly compared to the Parallel Simplex algorithm.

Future Work:

The Parallel and Extended Simplex algorithms both show promising results for improved optimization performance, and additional modification may further enhance performance and flexibility. The simplex vertex transformations listed in Figure 1.1 are the ones proposed in the original algorithm; implementing different sets of transformations, or adaptive transformations, might increase the performance of the algorithm. The extra vertices in the Extended Simplex algorithm might also be utilized for purposes other than search. Allowing an incorrect transformation with some probability could let the simplex escape a false convergence; otherwise, the incorrect transformation is corrected in the next iteration. This would maintain the performance of the Parallel Simplex algorithm while increasing robustness on difficult problems.

REFERENCES

Lee, D., & Wiswall, M. (2007). A Parallel Implementation of the Simplex Function Minimization Routine. Computational Economics, 30(2), 171-187. Retrieved August 9, 2013, from simplex_edit_2_8_2007.pdf

Nelder, J., & Mead, R. (1965). A Simplex Method for Function Minimization. The Computer Journal, 7(4), 308-313. Retrieved August 9, 2013.

Storn, R., & Price, K. (1997). Differential Evolution: A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11(4), 341-359.
