This article was downloaded by: [Simon Fraser University] on 20 June 2013, at 02:51. Publisher: Taylor & Francis, Informa Ltd, Registered in England and Wales. Registered office: Mortimer House, Mortimer Street, London W1T 3JH, UK.

Engineering Optimization: publication details, including instructions for authors and subscription information.

Modification of DIRECT for high-dimensional design problems
Arash Tavassoli, Kambiz Haji Hajikolaei, Soheil Sadeqi, G. Gary Wang and Erik Kjeang
School of Mechatronic Systems Engineering, Simon Fraser University, Surrey, BC, Canada
Published online: 19 June 2013.

To cite this article: Arash Tavassoli, Kambiz Haji Hajikolaei, Soheil Sadeqi, G. Gary Wang & Erik Kjeang (2013): Modification of DIRECT for high-dimensional design problems, Engineering Optimization, DOI: /X
Engineering Optimization, 2013

Modification of DIRECT for high-dimensional design problems

Arash Tavassoli, Kambiz Haji Hajikolaei, Soheil Sadeqi, G. Gary Wang* and Erik Kjeang

School of Mechatronic Systems Engineering, Simon Fraser University, Surrey, BC, Canada

(Received 23 November 2012; final version received 1 April 2013)

DIviding RECTangles (DIRECT), as a well-known derivative-free global optimization method, has been found to be effective and efficient for low-dimensional problems. When facing high-dimensional black-box problems, however, DIRECT's performance deteriorates. This work proposes a series of modifications to DIRECT for high-dimensional problems (dimensionality d > 10). The principal idea is to increase the convergence speed by breaking its single initialization-to-convergence approach into several more intricate steps. Specifically, starting with the entire feasible area, the search domain shrinks gradually and adaptively towards the region enclosing the potential optimum. Several stopping criteria have been introduced to avoid premature convergence. A diversification subroutine has also been developed to prevent the algorithm from being trapped in local minima. The proposed approach is benchmarked using nine standard high-dimensional test functions and one black-box engineering problem. All these tests show a significant efficiency improvement over the original DIRECT for high-dimensional design problems.

Keywords: global optimization; DIRECT method; high-dimensional problems

1. Introduction

Global optimization (GO) methods can be roughly classified into deterministic and stochastic approaches. Stochastic methods use random sampling; hence different runs may produce different outcomes for an identical problem. The genetic algorithm (GA) (Goldberg 1989), simulated annealing (SA) (Kirkpatrick, Gelatt, and Vecchi 1983) and particle swarm optimization (PSO) (Kennedy and Eberhart 1995) are well-known representatives of this class.
In contrast, deterministic methods work on a predetermined sequence of point sampling that converges to the global optimum; therefore different runs give identical answers for the same optimization problem. Branch and bound (Lawler and Wood 1966) and DIviding RECTangles (DIRECT) (Jones, Perttunen, and Stuckman 1993; Jones 2001) are examples of this category. This work aims to optimize high-dimensional, expensive and black-box (HEB) functions. In science and engineering, these three factors make the optimization procedure very challenging. First, high dimensionality makes the search space huge and systematic searching intractable, a difficulty known as the curse of dimensionality. Secondly, computationally expensive problems are those with time-consuming function evaluations, usually involving a simulation such as finite element analysis (FEA) or computational fluid dynamics (CFD); this limits the number of function calls available for optimization in practice. Lastly, black-box functions are those with no explicit function or formula, which makes gradient-based optimization methods impossible to use.

*Corresponding author. Email: gary_wang@sfu.ca
© 2013 Taylor & Francis

One possible way of dealing with HEB problems is to make use of metamodels. Mode pursuing sampling (MPS) (Wang, Shan, and Wang 2004) is a metamodel-based optimization method that integrates a global metamodel with a local metamodel, dynamically interlinked by a discriminative sampling approach. It shows very good performance for expensive black-box problems but has difficulties with high dimensionality. Different types of high-dimensional model representation (HDMR) (Rabitz and Alis 1999; Alis and Rabitz 2001) have been introduced by researchers and are identified as potential metamodels for high-dimensional problems (Shan and Wang 2010a). These methods, however, are only for metamodelling, and are not standalone optimization approaches. Recently, Shan and Wang (2010b) published a review article in which techniques for optimizing HEB problems are examined in detail, along with the challenges and the most promising approaches; they conclude that there is no mature method for optimizing HEB problems. In this article, DIRECT is chosen as a method with the potential to be modified and used for HEB problems. Since plain DIRECT is a derivative-free method, it can be used for black-box optimization. Its main problem is an exponentially increasing demand for function evaluations as the number of variables grows, which is exactly the focus of this work. Motivated by a modification to Lipschitzian optimization, DIRECT was first developed by Jones and colleagues in 1993 (Jones, Perttunen, and Stuckman 1993; Jones 2001). Based on a space-partitioning scheme, the algorithm works as a deterministic GO routine, performing simultaneous global exploration and local exploitation. Following the introduction of this method in the early 1990s, several authors studied the behaviour of DIRECT with the aim of improving its performance.
Gablonsky (2001) tried to improve it by modifying the original method and combining it with another routine known as implicit filtering. Gablonsky and Kelley (2001) proposed a form of DIRECT that is strongly biased towards local search, which performed well for problems with a single global minimum and only a few local optima. Huyer and Neumaier (1999) built on the idea behind DIRECT and presented a GO algorithm based on multilevel coordinate search. Finkel and Kelley (2004) analysed the convergence behaviour of DIRECT and proved a subsequential convergence result for the algorithm. More recently, Chiter (2006a, 2006b) proposed a new definition of potentially optimal intervals for the DIRECT algorithm. Finally, Deng and Ferris (2007) extended the method to noisy functions, adopting an approach that replicates multiple function evaluations per point and takes an average to reduce functional uncertainty. Meanwhile, other authors looked at applications of the method. Zhu and Bogy (2004) modified DIRECT to handle tolerances and to deal with hidden constraints, and then used the modified algorithm in a hard disc drive air-bearing design; in a similar fashion, Lang, Liu, and Yang (2009) used DIRECT in their uniformly redundant arrays (URA) design process. Although all of these modified versions of DIRECT work well for their specified aims, none of them works efficiently for high-dimensional problems. The intractability of systematic searching caused by high dimensionality persists in the modified versions and limits DIRECT to low-dimensional problems. While DIRECT works effectively on most low-dimensional problems, a marked decrease in performance is seen on high-dimensional cost functions.
It is found that the deterministic space-covering behaviour of DIRECT, together with its parallel global and local search routines, makes it a very slow strategy for optimizing high-dimensional problems. This issue is also observed in all modified versions of DIRECT. Although DIRECT is capable of reaching the optimal region, the process needs significantly more function evaluations for high-dimensional problems, specifically those with a large search domain. In this article, a series of modifications to the DIRECT algorithm is proposed to make it amenable to high-dimensional problems. In this work, the core DIRECT code is the MATLAB version of DIRECT by Finkel (2004). A few modifications are made in the main code (as discussed in Section 3.1) and the rest remains unchanged. DIRECT in flowcharts and descriptions refers to this MATLAB code.

2. DIRECT

The DIRECT method is a derivative-free algorithm, dealing with problems of the form:

min f(x)  (1)
s.t. x_L ≤ x ≤ x_U

in which x_L and x_U are the lower and upper bounds, respectively. It begins by scaling the search domain into a unit hypercube. This transformation simplifies the analysis, and allows precomputation and storage of common values used repeatedly in calculations. The algorithm initiates its search by sampling the objective function at the centre of the entire design space. Subsequently, the domain is trisected into three smaller hyperrectangles and two new centre points are sampled. The centre point of each hyperrectangle is considered its representative point. In each iteration, the potentially optimal hyperrectangles are identified and partitioned into a set of smaller domains by trisecting them along their longest coordinate (Figure 1). The identification of a potentially optimal hyperrectangle is based on its size and the value of the objective function at its centre. Thus, potentially optimal hyperrectangles either have low function values at their centres or are large enough to be good targets for global search. In other words, if α_i represents the size of hyperrectangle i, calculated as the distance from its centre point to a corner point, and H is the index set of existing hyperrectangles, a hyperrectangle i ∈ H is called a potentially optimal candidate if there exists a constant ξ > 0 such that:

f(c_i) − ξ α_i ≤ f(c_j) − ξ α_j,  for all j ∈ H  (2)

f(c_i) − ξ α_i ≤ f_min − ε |f_min|  (3)

in which f_min is the lowest function value available and ε is an insensitive parameter typically set to 1e−4 (Deng and Ferris 2007). A graphical interpretation of this process is illustrated in Figure 2.
These two selection criteria correspond to the selection of the lower convex hull of this graph. Figure 1. DIRECT optimization algorithm (Deng and Ferris 2007).
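To make the selection concrete, the check implied by conditions (2) and (3) can be sketched in Python (an illustrative rendering, not the authors' MATLAB code): for each hyperrectangle, the two conditions bound the admissible slope ξ from below and above, and the rectangle is potentially optimal when an admissible ξ exists.

```python
import math

def potentially_optimal(alphas, fvals, eps=1e-4):
    """Return indices of potentially optimal hyperrectangles.

    alphas[i] -- centre-to-corner distance of hyperrectangle i
    fvals[i]  -- objective value at the centre of hyperrectangle i
    Conditions (2)-(3) reduce to finding a slope xi between a lower
    bound (from smaller rectangles and the f_min test) and an upper
    bound (from larger rectangles).
    """
    fmin = min(fvals)
    po = []
    for i, (ai, fi) in enumerate(zip(alphas, fvals)):
        lo, hi = 0.0, math.inf
        dominated = False
        for j, (aj, fj) in enumerate(zip(alphas, fvals)):
            if j == i:
                continue
            if aj < ai:                 # smaller rectangle: lower bound on xi
                lo = max(lo, (fi - fj) / (ai - aj))
            elif aj > ai:               # larger rectangle: upper bound on xi
                hi = min(hi, (fj - fi) / (aj - ai))
            elif fj < fi:               # same size, strictly better centre
                dominated = True
        # condition (3): f(c_i) - xi*alpha_i <= f_min - eps*|f_min|
        lo = max(lo, (fi - fmin + eps * abs(fmin)) / ai)
        if not dominated and lo <= hi:
            po.append(i)
    return po
```

For example, with sizes (0.5, 0.25, 0.25) and centre values (1.0, 0.5, 0.8), the large rectangle and the better of the two equally sized small ones are selected, matching the lower-convex-hull picture of Figure 2.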
Figure 2. Identifying the potentially optimal hyperrectangles (Deng and Ferris 2007).

Assuming an infinite number of iterations, DIRECT is proven to converge to the global optimum as long as the objective function is continuous, or at least continuous in the neighbourhood of the global optimum. Readers are encouraged to see Jones, Perttunen, and Stuckman (1993) for a comprehensive description of DIRECT.

3. High-dimensional DIRECT

This work proposes a modified DIRECT for high-dimensional problems; the proposed method is thus referred to as high-dimensional DIRECT, or HD-DIRECT. The main idea is to break the algorithm's approach towards the optimum from one single initialization-to-convergence step into several steps. In each step, it advances to a closer solution, finally reaching the optimum. Stopping criteria must be established in order to pass the solutions from one step to the next. A summary of this procedure is outlined in Figure 3. Note that in this work the term iteration refers to an individual hyperrectangle division and sampling step inside the original DIRECT code, while a cycle means a complete set of DIRECT iterations, convergence analysis and corresponding domain adjustment (see Section 3.2).

3.1. DIRECT core code

In each cycle, DIRECT is called, and its early answers are saved and used for further analysis. The DIRECT process remains intact except for an update of its stopping criteria. The static maximum allowable number of function evaluations (NFE) criterion in the original DIRECT has been replaced by a dynamic one. It is known that DIRECT is capable of reaching the optimum region in relatively few iterations, but shows slow convergence to the actual optimum. This behaviour is dramatically magnified for high-dimensional problems.
As a means to eliminate this drawback, a secondary criterion has been added, which terminates the program as soon as it sees a comparatively small difference in results. In other words, it stops if either of the two following conditions holds:

|f_Last iteration − f̄| / |f̄| ≤ t_L  (4)

|f_Last iteration − f̄| ≤ t_S  (5)
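A minimal sketch of this test (in Python rather than the original MATLAB; Equation (4) is read here as a relative difference and Equation (5) as an absolute one, with the averaging window following the 3-to-10-iteration scheme described below):

```python
def should_stop(history, window, t_L, t_S):
    """Secondary stopping test of Eqs. (4)-(5), a sketch.

    history -- best f value recorded after each completed iteration
    window  -- averaging length; the method grows it from 3 in the
               first cycle by one per cycle, up to 10
    """
    if len(history) < window + 1:
        return False                                # not enough iterations yet
    f_last = history[-1]
    f_avg = sum(history[-window - 1:-1]) / window   # average of prior iterations
    rel = abs(f_last - f_avg) <= t_L * abs(f_avg)   # Eq. (4), relative test
    absolute = abs(f_last - f_avg) <= t_S           # Eq. (5), absolute test
    return rel or absolute
```

A caller would compute `window = min(3 + cycle - 1, 10)` for cycle numbers starting at 1, per the schedule described below.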
Figure 3. Flowchart of the high-dimensional DIRECT algorithm.

in which f_Last iteration = f_i in the last iteration and f̄ = average of f_i over the last 3–10 iterations. The averaging of function values is done dynamically over the last three to ten prior iterations, based on the number of cycles completed. Evidently, early cycles need only a rough approximation, while later ones need higher accuracy. For the initial cycle, only three prior iterations are used for calculating the average f̄. In the following cycles, this number increases by one per cycle until reaching 10, which is then used for all ensuing cycles until convergence. As mentioned earlier, the maximum allowable NFE also changes according to the progress of convergence. Starting with a relatively high value, it changes to 1.3 times the NFE at which the previous cycle terminated. This criterion prevents DIRECT from wasting a large number of NFE, while the factor of 1.3 ensures that the NFE cap does not drop to an undesirably small value, especially in the last cycles. DIRECT, including these two stopping criteria, will be called the DIRECT core code from now on.

3.2. Result analysis and domain reconstruction

As the proposed method terminates DIRECT in a cycle with the new stopping criteria, the history of iterative change in the objective function and variables is redirected for further analysis. The idea is to focus on the region encircling the optimum. In one dimension, this means relocating the bounds closer to the optimum of the last cycle; in this way, regions with no point of interest are crossed out gradually. In n-dimensional problems, this is done by analysing the history of each variable individually: every variable x_i (i = 1, 2, ..., n) is inspected for steady-state behaviour.

Remark 1. A steady trend of x_i means:

|x_i,Last iteration − x̄_i| / |x̄_i| ≤ predefined tolerance (default = 10^−8)  (6)

where x_i,Last iteration = x_i in the last iteration of the former cycle and x̄_i = average of x_i over the last n iterations of the former cycle (n = number of variables).

In the case of a steady trend of the ith variable, its bounds will shrink before commencing the next cycle.

Remark 2. Domain shrinkage and bound adjustment happen in two steps:

Step 1: Assuming a steady trend of x_i, its domain width is divided by:

Division Factor = 2^(1/m)  (7)

where m is the number of variables that have shown the steady trend in that cycle. Up to here, therefore, the entire n-dimensional search domain is divided by an overall factor of 2 in each cycle. Over and above that, a secondary reduction in search space accounts for a recurring steady trend in one direction, i.e. if x_i has shown this trend in the previous k cycles, the search domain also shrinks in the ith direction by a factor of 2^(k−1).
Step 2: Having shrunk the search domain to its new size, the bounds are set such that:

x_i (previous cycle) = (UB + LB)_new cycle / 2  (8)

This corresponds to the next cycle's search starting from the previous cycle's optimum point. The same bound relocation happens for the variables with no shrinkage in size. Figure 4 shows a schematic view of the bound allocation algorithm.
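Remarks 1 and 2 can be sketched together in Python (illustrative, not the authors' MATLAB; clipping the recentred box to the initial bounds lb0/ub0 is an assumption added here so the new box stays inside the original domain):

```python
def steady_trend(x_hist, n, tol=1e-8):
    """Steady-trend test of Remark 1 (Eq. 6) for one variable.

    x_hist -- iteration history of the variable in the former cycle
    n      -- number of variables, used as the averaging window
    """
    x_last = x_hist[-1]
    x_avg = sum(x_hist[-n - 1:-1]) / n
    return abs(x_last - x_avg) <= tol * abs(x_avg)

def shrink_bounds(lb, ub, x_prev, steady, steady_runs, lb0, ub0):
    """Domain shrinkage and recentring of Remark 2 (Eqs. 7-8), a sketch.

    lb, ub      -- current bounds per variable
    x_prev      -- best point of previous cycle (new box centre, Eq. 8)
    steady      -- which variables passed the steady-trend test
    steady_runs -- k: consecutive cycles each variable has been steady
    lb0, ub0    -- initial bounds, used to clip the new box (assumed)
    """
    m = sum(steady)
    factor = 2.0 ** (1.0 / m) if m else 1.0         # Eq. (7)
    new_lb, new_ub = [], []
    for i in range(len(lb)):
        width = ub[i] - lb[i]
        if steady[i]:
            width /= factor                          # primary shrink
            if steady_runs[i] > 1:                   # recurring steady trend
                width /= 2.0 ** (steady_runs[i] - 1)
        half = width / 2.0
        # Eq. (8): previous optimum sits at the centre of the new box
        new_lb.append(max(lb0[i], x_prev[i] - half))
        new_ub.append(min(ub0[i], x_prev[i] + half))
    return new_lb, new_ub
```

With two steady variables, m = 2 gives a per-axis factor of 2^(1/2), so the total domain volume halves in one cycle, as stated above.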
Figure 4. Schematic view of domain change with x at the centre of the new space.

3.3. Diversification subroutine and f analysis

What is being done is domain restructuring to exclude some regions from forthcoming cycles; as the cycles go on, such exclusion significantly speeds up the search. On the one hand, it saves a great number of function evaluations by focusing on regions with lower function values. On the other hand, there is a chance, albeit a low one, of being trapped in a local minimum and overlooking the region with the true optimum. To avoid this potential pitfall, a diversification subroutine has been proposed, which generates random points in the excluded areas, picks the one with the minimum f value, runs DIRECT in a domain enclosing the candidate, compares its f with the current cycle's f, and replaces f_Cycle with the new f if it shows a smaller function value. As shown in Figure 5, in each cycle there is a chance of entering the diversification subroutine; a constant probability is defined, shown as PR in the flowchart. On entering the subroutine, 10n (n = number of variables) X vectors (X = [x_1, x_2, ..., x_n]) are randomly generated, the function is evaluated on each, and the one with the lowest value of f is selected.

Remark 3. The randomly generated vectors X: (1) must be in the initial domain of interest; (2) cannot be in the search domain of the last cycle.

Similar to the domain allocation process of Section 3.2, Remark 2, a new search domain is established enclosing the chosen X.

Remark 4. Based on the position of X, the domain assignment schemes differ, as explained below. As illustrated in Figure 6, this new search domain has the random X at its centre and is limited by either the bounds of the initial domain (Case 1) or the bounds of the last cycle's search domain (Case 2).
A specific case is when one or more variables (but not all n) possess a value inside the previous search region. In this case, a portion (or all) of the previous search domain (the shaded area) is necessarily included as well (Case 3). Finally, the DIRECT core code runs on the new domain and the result is compared with the f of the previous cycle. A better answer immediately results in a jump to the new region, and the process continues with the new domain.

Remark 5. The cyclic procedure stops, and the optimum is reported, as soon as any of these three conditions occurs:

(1) The f variation in two consecutive cycles becomes less than a defined tolerance (the same accuracy-defining tolerance t_S of Section 3.1):

|f_Cycle − f_Cycle−1| ≤ predefined tolerance  (9)
Figure 5. Diversification subroutine.

Figure 6. Schematic view of domain allocation for the random X.

(2) All n variables show a steady trend in the last cycle (based on the Remark 1 definition).

(3) The number of cycles exceeds the maximum allowable number of cycles.

4. Performance test results

The principal objective of this article is to increase the performance of the DIRECT algorithm for high-dimensional problems. A series of modifications has been proposed, and HD-DIRECT can find the optimum with remarkably fewer NFE. In order to prove this assertion, the performance of DIRECT has been compared with that of the modified version on nine standard high-dimensional test problems (Hock and Schittkowski 1980; Schittkowski 1987; Yang 2010; Molga and Smutnicki 2005). To show the performance enhancement, the analysis has been performed on four 30-variable, three 20-variable and two 15-variable cases (see Appendix 1). Table 1 shows a summary of these results.

Table 1. Summary of results: for each test function, the search domain (functions 1–9: [−2, 3], [0, 5], [−1, 2], [0, 5], [−4, 3], [−3, 0], [0, 7], [−30, 20], [−3, 7]), the DIRECT stopping tolerances t_L and t_S (Section 3.1), and the NFE and optimum f obtained by DIRECT and by HD-DIRECT. Note: NFE = number of function evaluations.

It is notable that, for a fair comparison, the same stopping criteria of Equations (4) and (5) have been used for the original DIRECT. The value of t_L is set adaptively with the number of variables (10^−2, 10^−3, and so on), while t_S changes based on the desired accuracy of the final result: a smaller t_S tolerance leads to a more accurate answer, using more function evaluations. Detailed values of t_S are shown in Table 1. It was important to consider the stochastic effect of the diversification subroutine on the results. Hence, in each case, the result of HD-DIRECT is the average of 10 independent runs, each with a 10% probability of entering the subroutine (PR = 0.1). Finally, although the different levels of complexity of these problems dictate dissimilarity in the relative enhancement achieved by HD-DIRECT, a notable improvement has been demonstrated for each of the nine test functions.
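The candidate-generation step of the diversification subroutine discussed above (Section 3.3, Remark 3) can be sketched as follows (illustrative Python, not the original MATLAB): points are drawn uniformly in the initial domain, rejected if they fall inside the last cycle's box, and the best survivor is returned.

```python
import random

def diversification_candidate(n, lb0, ub0, lb, ub, f):
    """Pick the best random point outside the current search box.

    n        -- number of variables; 10*n candidates are generated
    lb0, ub0 -- initial domain bounds (Remark 3, condition 1)
    lb, ub   -- last cycle's search box, which candidates must avoid
                (Remark 3, condition 2)
    f        -- objective function
    """
    best_x, best_f = None, float("inf")
    count = 0
    while count < 10 * n:
        x = [random.uniform(lb0[i], ub0[i]) for i in range(n)]
        if all(lb[i] <= x[i] <= ub[i] for i in range(n)):
            continue                    # inside the excluded box: rejected
        count += 1
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

A new search domain centred on the returned point would then be built as in Section 3.2, Remark 2, and the DIRECT core code run on it.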
The history of convergence of both methods clarifies the effect of the proposed modification. Figure 7(a–d) shows the convergence trends of two 20-variable and two 30-variable test problems; similar graphs can be plotted for the other five benchmark functions. One can see that, to reach the same accuracy, the NFE required by HD-DIRECT is significantly smaller than that for the original DIRECT. Figure 8 illustrates the required NFE of DIRECT versus HD-DIRECT for test function no. 2. In each case, the horizontal axis shows the obtained optimum (the theoretical optimum for this test problem is zero), while the vertical axis shows the corresponding NFE. It is evident from these graphs that the proposed method not only decreases the required NFE, but also gives the user the opportunity to reach more accurate solutions at the cost of a much smaller number of samples; e.g. seeking an accuracy of 10^−4 instead of 10^−2 requires an additional 800,000 NFE in DIRECT, while the same improvement can be attained in HD-DIRECT with 50,000 more samples. The main focus of this modification was to increase the performance of DIRECT on high-dimensional problems. To illustrate this achievement, Figure 9 has been plotted for the first test function of Table A1. It shows the required NFE for this scalable benchmark function with different numbers of variables. These stated NFE correspond to an identical accuracy of 10^−4 in both methods. As expected, the performance increase for higher numbers of variables is evident.
Figure 7. Convergence history of test functions nos 1 (a) and 2 (b), each with 30 variables, and nos 5 (c) and 6 (d), each with 20 variables. The corresponding search domain of each is as given in Table 1, and PR = 0.

Figure 8. Number of function evaluations in DIRECT versus HD-DIRECT for test function no. 2, with 30 variables and the search domain given in Table 1, PR = 0. The vertical axis shows the required NFE for convergence to the approximate values shown on the horizontal axis.
Figure 9. Required number of function evaluations (NFE) for convergence to an approximate accuracy of 10^−4, for conventional DIRECT and for HD-DIRECT at each number of variables.

5. Three-part assembly variation problem

After testing the method on nine standard benchmark functions, an engineering problem was selected to study the effectiveness of the method in practice. A three-part assembly variation problem, shown in Figure 10, is chosen (Whitney 2004). Both the DIRECT and HD-DIRECT methods are tested on the variation of a specific key characteristic (KC) and the results are compared. The parts can be assembled in different ways. In this example, parts A and B are assembled first; subsequently, part C is joined to the subassembly of parts A and B. Each part has one hole, one slot and three clamps as the locating fixtures. The fixture locations are the input variables of the problem. The distance between the lower left corner of part A and the upper right corner of part C defines the KC, and the six-sigma variation of the KC is the objective function to be minimized. The model is created in 3DCS Variation Analyst software (last accessed 27 March 2013), with defined dimensions of 400 mm in length and 200 mm in width, while all parts are assumed to be rigid. Holes and pins are assigned diameters of 10 mm and 9 mm, respectively. Tolerances are defined for hole, slot and pin sizes with a range of ±0.5 mm and normal distribution.

Figure 10. Three-part assembly problem and the related fixtures (Whitney 2004).

In addition, clamp location tolerances are defined perpendicular to the plates with a range of ±1 mm and normal distribution. Three holes, three slots and nine clamps exist in the model, and defining each of them requires x and y coordinate values; therefore, the problem has 30 input variables in total. The six-sigma value of the specified KC is obtained from Monte Carlo simulation in 3DCS, which is treated as a black-box function. The holes, slots and clamps can be located continuously on the plates, subject to some constraints. The first constraint is the minimum distance between the fixtures and their own plate edges, and between the fixtures themselves, which is set to 10 mm. The second constraint originates from practical issues in the assembly process, e.g. collision of the robot arms; for this constraint, specific regions are defined for the fixtures on the plates, as shown in Figure 11.

Figure 11. Feasible search domain for different parts in a plate (all plates have a conceptually similar feasible search region for each element).

Table 2. Assembly variation problem results: the optimum f and the NFE required by DIRECT and by HD-DIRECT. Note: NFE = number of function evaluations.

Figure 12. Convergence history of the three-part assembly variation problem.
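The distance rules above can be sketched as a feasibility check (illustrative Python; the 400 mm × 200 mm plate size and 10 mm minimum distance are from Section 5, while applying one uniform rule to all fixture pairs on a plate is an assumption of this sketch):

```python
def feasible(points, plate_w=400.0, plate_h=200.0, min_dist=10.0):
    """Check the fixture-placement rules for one plate.

    points -- (x, y) locations of the fixtures on the plate
    Every fixture must keep at least min_dist from the plate edges
    and from every other fixture.
    """
    for (x, y) in points:
        if not (min_dist <= x <= plate_w - min_dist and
                min_dist <= y <= plate_h - min_dist):
            return False                       # too close to a plate edge
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if (dx * dx + dy * dy) ** 0.5 < min_dist:
                return False                   # two fixtures too close
    return True
```

Such a check matches the approach described below, where the optimization runs without the overlap constraint and feasibility is verified on the obtained optimum configuration.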
It must be noted that DIRECT is an algorithm that is generally suitable for unconstrained problems. The specified bounds in Figure 11 for the fixtures prevent them from being close to the plate edges. The only remaining constraint is the overlap of holes and slots with the clamps. In this article, the problem has been optimized without considering this constraint, and the constraint check has been performed on the obtained optimum configuration. Table 2 shows the remarkable difference between the results obtained from the DIRECT and HD-DIRECT algorithms: with 20% of the function evaluations required by DIRECT, HD-DIRECT reaches a more accurate optimum solution. Figure 12 is a good demonstration of how HD-DIRECT avoids getting trapped in local optima. While the conventional DIRECT wastes a large number of function evaluations in regions containing local minima (with f = 2.4 and f = 2.14), the proposed approach effectively moves to more attractive regions and converges rapidly to the global optimum.

6. Conclusion

DIRECT is found to be slow for high-dimensional problems, with an exponentially increasing demand for function evaluations. In this work, the single-step approach of DIRECT was replaced with a series of DIRECT cycles with progressive reduction of the search region. Supplementary stopping criteria help to transfer a premature solution to the analysis section for domain restructuring. In a dynamic manner, and based on the convergence history of the prior cycles, the search domain adaptively shifts towards a local optimal region. This prevents extra sampling in unattractive regions. To compensate for the possibility of being trapped in a local optimum, a diversification subroutine has been developed that performs random sampling in the excluded regions. The proposed HD-DIRECT has been benchmarked using nine standard test functions as well as a practical assembly problem, and the performance increase has been illustrated and discussed.
Finally, it is notable that the exponentially increasing demand of DIRECT for function evaluations in high-dimensional problems has been replaced with a relatively linear trend. This makes HD-DIRECT a suitable choice for high-dimensional cost functions, although further improvements are needed to make it more efficient for HEB problems.

References

Alis, O. F., and H. Rabitz. 2001. Efficient Implementation of High Dimensional Model Representations. Journal of Mathematical Chemistry 29 (2).
Chiter, L. 2006a. DIRECT Algorithm: A New Definition of Potentially Optimal Hyperrectangles. Applied Mathematics and Computation 179.
Chiter, L. 2006b. A New Sampling Method in the DIRECT Algorithm. Applied Mathematics and Computation 175.
Deng, G., and M. Ferris. 2007. Extension of the DIRECT Optimization Algorithm for Noisy Functions. In Proceedings of the Simulation Conference. IEEE Conference Publications.
Finkel, D. 2004. Direct Optimization Algorithm, Version 4.0. Accessed February 5. ctk/Finkel_Direct/Direct.m.
Finkel, D. E., and C. T. Kelley. 2004. Convergence Analysis of the DIRECT Algorithm. Center for Research in Scientific Computation and Department of Mathematics, North Carolina State University, Raleigh, NC.
Gablonsky, J. M. 2001. Modifications of the DIRECT Algorithm. PhD diss., North Carolina State University, Raleigh, NC.
Gablonsky, J. M., and C. T. Kelley. 2001. A Locally-Biased Form of the DIRECT Algorithm. Journal of Global Optimization 21.
Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Boston: Addison-Wesley.
Hock, W., and K. Schittkowski. 1980. Test Examples for Nonlinear Programming Codes. Journal of Optimization Theory and Applications 30 (1).
Huyer, W., and A. Neumaier. 1999. Global Optimization by Multilevel Coordinate Search. Journal of Global Optimization 14.
Jones, D. R. DIRECT Global Optimization Algorithm. In Encyclopedia of Optimization, edited by C. A. Floudas and P. M. Pardalos. Norwell: Kluwer.
Jones, D. R., C. D. Perttunen, and B. E. Stuckman. 1993. Lipschitzian Optimization Without the Lipschitz Constant. Journal of Optimization Theory and Applications 79 (1).
Kennedy, J., and R. Eberhart. 1995. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia.
Kirkpatrick, S., C. D. Gelatt, and M. P. Vecchi. 1983. Optimization by Simulated Annealing. Science 220.
Lang, H., L. Liu, and Q. Yang. Design of URAs by DIRECT Global Optimization Algorithm. Optik 120.
Lawler, E. L., and D. E. Wood. Branch-and-Bound Methods: A Survey. Operations Research 14.
Molga, M., and C. Smutnicki. Test Functions for Optimization Needs. Accessed February 5.
Rabitz, H., and O. F. Alis. General Foundation of High Dimensional Model Representation. Journal of Mathematical Chemistry 25.
Schittkowski, K. More Test Examples for Nonlinear Programming Codes. New York: Springer.
Shan, S., and G. G. Wang. 2010a. Metamodeling for High Dimensional Simulation-Based Design Problems. Journal of Mechanical Design 132.
Shan, S., and G. G. Wang. 2010b. Survey of Modeling and Optimization Strategies to Solve High-dimensional Design Problems with Computationally Expensive Black-Box Functions. Structural and Multidisciplinary Optimization 41 (2).
Wang, L., S. Shan, and G. G. Wang. Mode-Pursuing Sampling Method for Global Optimization on Expensive Black-Box Functions. Journal of Engineering Optimization 36 (4).
Whitney, D. E. Mechanical Assemblies: Their Design, Manufacture, and Role in Product Development. New York: Oxford University Press.
Yang, X.-S. Test Problems in Optimization. In Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley & Sons.
Zhu, H., and D. B. Bogy. Hard Disc Drive Air Bearing Design: Modified DIRECT Algorithm and Its Application to Slider Air Bearing Surface Optimization. Tribology International 37.

Appendix 1. Test functions

Table A1. Test problems.

1. f(x) = (x^T A x)^2, A = diag(1, 2, 3, ..., n)
2. f(x) = Σ_{i=1}^{n} x_i^2 + [Σ_{i=1}^{n} (1/2) i x_i]^2 + [Σ_{i=1}^{n} (1/2) i x_i]^4
3. f(x) = Σ_{i=1}^{29} [100(x_{i+1} − x_i^2)^2 + (1 − x_i)^2]
4. f(x) = Σ_{i=1}^{n} x_i, x_i ≥ 0
5. f(x) = Σ_{i=1}^{n} i^3 (x_i − 1)^2
6. f(x) = Σ_{i=1}^{n} i (x_i^2 + x_i^4)
7. f(x) = 1 − exp[−(1/60) Σ_{i=1}^{n} x_i^2]
8. f(x) = −exp[(1/n) Σ_{i=1}^{n} cos(2πx_i)] − 20 exp[−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)] + 20 + exp(1)
9. f(x) = Σ_{i=1}^{n} x_i^2
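For readers reproducing the benchmarks, two of the Table A1 functions — the 30-variable Rosenbrock function (No. 3) and the Ackley function (No. 8) — can be sketched in plain Python. This is a minimal sketch; the function names `rosenbrock` and `ackley` are our own labels, not part of the original article's code.

```python
import math

def rosenbrock(x):
    # Test function 3: sum over i = 1..n-1 of
    # 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2; minimum 0 at x = (1, ..., 1).
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    # Test function 8 (Ackley); minimum 0 at the origin.
    n = len(x)
    mean_sq = sum(xi ** 2 for xi in x) / n
    mean_cos = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return (-20.0 * math.exp(-0.2 * math.sqrt(mean_sq))
            - math.exp(mean_cos) + 20.0 + math.e)

# Both functions attain their theoretical optimum of 0:
print(rosenbrock([1.0] * 30))  # 0.0
print(ackley([0.0] * 30))      # ~0 (up to floating-point round-off)
```

Such closed-form test problems are cheap to evaluate, which is what makes them convenient for benchmarking the convergence speed of DIRECT-type algorithms over many function evaluations.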