Optimization of Axle NVH Performance Using the Cross Entropy Method

Glenn Meinhardt
Department of Industrial and Systems Engineering
Oakland University
Rochester, Michigan 48309
Email: gameinha@oakland.edu

Sankar Sengupta
Department of Industrial and Systems Engineering
Oakland University
Rochester, Michigan 48309
Email: ssengupta@oakland.edu

Abstract

An approach to the optimization of automobile axles for noise, vibration and harshness (NVH) performance based on end-of-line testing is presented. The method used, the cross-entropy method, iteratively solves an objective function by sampling from statistical distributions of the independent variables of the objective function. A Matlab program written by the authors is presented and discussed. The algorithm used within the method is presented along with solutions under different convergence criteria.

Introduction

Noise, Vibration and Harshness (NVH) performance is a critical quality characteristic for automobile manufacturers (original equipment manufacturers, or OEMs) and driveline component manufacturers alike. A major component of the driveline is the axle, which transfers torque from the engine and driveshaft to the wheels. For axle manufacturers, one of the primary NVH metrics is gear whine [1]. To ensure satisfactory gear whine performance when the automobile leaves the factory, many OEMs now require axle assemblies to be tested for gear whine at the end of the assembly line using an end-of-line NVH test (EOLT) prior to shipment to their assembly plants. It is in the best interest of both the OEMs and the axle manufacturers to ensure that the vibration levels of axles not only meet the requirement at the EOLT, but are as low as possible [2].

One way to control the levels at the EOLT is to understand the correlation of the upstream performance variables to the EOLT result. Previous work by the authors examined one such correlation [3, 4] involving the assembly parameters of the axle and the resulting coast-side vibration. This work illustrates the use of the cross-entropy method to minimize the EOLT result, with the regression equation presented in [4] used as the objective function. The solution of the same problem is presented in other works by the authors using Particle Swarm Optimization [5] and a Genetic Algorithm [6].

The Optimization Problem

The desire is to minimize the coast-side vibration given by the regression equation from [4]:

    Y(X) = 486.32 - 2.8049 a_1 - 2.7890 a_5 - 0.19745 c_6 + 0.36987 c_7 - 0.29785 d_3 - 0.26230 d_6    (1)

Therefore, the optimization problem is written

    \gamma^* = \min_X Y(X) = Y(X^*)    (2)

where γ* is the optimum value of Y, X = [a_1, a_5, c_6, c_7, d_3, d_6], and X* is the value of X associated with γ*.

The regression equation was derived from 21 samples of data collected from the assembly line. Clearly, the regression equation is valid only over the range of data from which it was derived. Therefore, the boundary conditions (constraints) for the optimization problem are the ranges of the variables from which the objective function was derived. The constraints are taken from the range of data in [4] and are summarized in Table 1. Equations (1) and (2) along with Table 1 completely define the optimization problem.

This problem can be solved very easily deterministically, and that solution is given in Table 2. The purpose of this work is to illustrate how the cross-entropy method can be used to solve optimization problems; the simple problem presented above and its deterministic solution provide a basis for such an illustration, with the results of the optimization compared to the deterministic solution.

Table 1  The Constraints for the Parameters, X, of the Regression Equation (Various Units, dB)

    Variable    Lower Bound    Upper Bound
    a1          66.8485        69.5424
    a5          68.8496        70.9555
    c6          40.0000        71.5957
    c7          46.0206        78.6900
    d3          46.0206        65.1055
    d6          56.9020        71.3640

Table 2  The Solution to the Deterministic Form of the Optimization Problem

    Optimization Method    a1         a5         c6         c7         d3         d6         Y(X*)
    Deterministic          69.5424    70.9555    71.5957    46.0206    65.1055    71.3640    58.1402
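
Because Equation (1) is linear and the constraints are simple bounds, the deterministic solution follows directly from the coefficient signs: a variable with a negative coefficient goes to its upper bound, and one with a positive coefficient goes to its lower bound. The following minimal Matlab sketch verifies this (the names b0, b, lb and ub are illustrative, not taken from the authors' program):

    % Coefficients of Equation (1), ordered [a1 a5 c6 c7 d3 d6]
    b0 = 486.32;
    b  = [-2.8049 -2.7890 -0.19745 0.36987 -0.29785 -0.26230];
    lb = [66.8485 68.8496 40.0000 46.0206 46.0206 56.9020];   % lower bounds (Table 1)
    ub = [69.5424 70.9555 71.5957 78.6900 65.1055 71.3640];   % upper bounds (Table 1)

    % A negative coefficient lowers Y as its variable grows, so take the upper
    % bound; a positive coefficient takes the lower bound.
    Xstar = ub;
    Xstar(b > 0) = lb(b > 0);
    Ystar = b0 + b * Xstar'    % returns 58.1402, matching Table 2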

The Cross Entropy Method

The Cross-Entropy method (CE) was first introduced by Rubinstein [7] and is a well-known evolutionary algorithm involving variance minimization. The name cross-entropy derives from its use of the Kullback-Leibler divergence, which is a measure of the loss of information when one probability distribution is used to approximate another. In CE, the relationship between the fitness value to be optimized, Y, and the controlling variables, X, is not evaluated deterministically, but through probability density functions f(·; v) representing X (the associated stochastic problem). The Kullback-Leibler divergence is employed to iteratively update the parameters of the probability density functions, minimizing the loss of information as the solution converges to minimum variance and the optimum value.

The basic algorithm for CE is a simple two-step iterative process [8, 9]:

1. Define the associated stochastic problem by generating appropriate random samples representing each of the variables of the objective function.
2. Update the parameters of the sampling distributions for the next iteration to move the solution closer to the optimum value.

In Step 2, CE utilizes rare-event estimation and importance sampling to converge each of the parameters of the probability density functions to its optimum value. A detailed derivation of CE can be found in Rubinstein and Kroese [10], with an excellent example presented by Kothari and Kroese [11].

To solve the problem with CE, it is first necessary to define the associated stochastic problem by replacing the static variables with their stochastic counterparts. It is appropriate to define the stochastic counterparts based on data collected from the assembly line. In previous work [3] it was shown that each variable can be represented by a normal distribution, except d3, owing to the bi-modal nature of its distribution; it was explained in [3] that this bi-modality is likely due to the process producing d3 from two sources. For the purposes of this work, the optimization method will assume d3 can also be represented by a single normal distribution. Thus Equation (1) becomes

    Y(v) = 486.32 - 2.8049 N(\mu_{a1}, \sigma_{a1}) - 2.7890 N(\mu_{a5}, \sigma_{a5}) - 0.19745 N(\mu_{c6}, \sigma_{c6}) + 0.36987 N(\mu_{c7}, \sigma_{c7}) - 0.29785 N(\mu_{d3}, \sigma_{d3}) - 0.26230 N(\mu_{d6}, \sigma_{d6})    (3)
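
To make Step 1 concrete, the fragment below sketches one generation of the associated stochastic problem: it draws samples from the normal distributions of Equation (3), discards infeasible draws, and evaluates Equation (1). This is a sketch only, reusing the illustrative names b0, b, lb and ub from the earlier fragment, assuming mu and sigma are 1-by-6 vectors holding the current distribution parameters, and assuming a recent Matlab with implicit expansion:

    N = 1000;                          % samples per generation (see Table 4)
    X = mu + sigma .* randn(N, 6);     % one row per sample, one column per variable

    keep = all(X >= lb & X <= ub, 2);  % enforce the Table 1 box constraints
    X = X(keep, :);

    Y = b0 + X * b';                   % Equation (1) for every feasible sample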

To solve Equation (2) given Equation (3), the CE method employs rare-event estimation such that

    l(\gamma) = P_u(Y(X) \le \gamma) = E_u\, I_{\{Y(X) \le \gamma\}}    (4)

where E_u is the expected-value operator and X is a random vector with probability density functions f(·; u) for u ∈ v, where

    v = [\mu_{a1}, \sigma_{a1}, \mu_{a5}, \sigma_{a5}, \mu_{c6}, \sigma_{c6}, \mu_{c7}, \sigma_{c7}, \mu_{d3}, \sigma_{d3}, \mu_{d6}, \sigma_{d6}]    (5)

Now l(γ) → 0 as γ → γ*. This is the rare event that is estimated under the importance sampling of X → X*. CE then adaptively updates γ and v until the solution converges to the tuple (γ*, v*). From Kothari and Kroese [11], for each iteration i, with known values of v_{i-1} and with γ_i assigned to be a known quantile of Y(X) under v_{i-1}, a value of γ_i is selected such that

    P_{v_{i-1}}(Y(X) \le \gamma_i) \ge \rho    (6)

and

    P_{v_{i-1}}(Y(X) \ge \gamma_i) \ge 1 - \rho    (7)

The parameter ρ defines the elite samples from the current population that will be used to estimate γ*. For this work, ρ is chosen to be 0.01. Again from Kothari and Kroese [11], v is updated in each iteration i by deriving \tilde{v}_i from the cross-entropy program given by

    \max_v D(v) = \max_v \frac{1}{N} \sum_{k=1}^{N} I_{\{Y(X_k) \le \gamma_i\}} \ln f(X_k; v)    (8)

where

    I_{\{Y(X_k) \le \gamma_i\}} = \begin{cases} 1, & Y(X_k) \le \gamma_i \\ 0, & Y(X_k) > \gamma_i \end{cases}    (9)

To avoid a sub-optimal solution, the convergence is slowed with a slowing factor α, such that the updated value of v is given by

    v_i = \alpha \tilde{v}_i + (1 - \alpha) v_{i-1}    (10)

with α usually defined to be 0.7 < α < 1.0; here α is assigned a value of 0.75. From Kothari and Kroese [11], for normally distributed values of X the solution of Equation (8) at each iteration i yields

    \tilde{v}_i = [\tilde{\mu}_i, \tilde{\sigma}_i]    (11)

with

    \tilde{\mu}_{ij} = \frac{\sum_{k=1}^{N} I_{\{Y(X_k) \le \gamma_i\}} X_{kj}}{N_{Elite}}, \quad j = 1, 2, \ldots, p    (12)

    \tilde{\sigma}_{ij} = \sqrt{\frac{\sum_{k=1}^{N} I_{\{Y(X_k) \le \gamma_i\}} (X_{kj} - \tilde{\mu}_{ij})^2}{N_{Elite}}}, \quad j = 1, 2, \ldots, p    (13)

where N is the number of feasible solutions in the generation and N_Elite is the number of elite samples.
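
In code, the update of Equations (10) through (13) reduces to taking the mean and standard deviation of the elite rows and blending them with the previous parameters. A hedged sketch, continuing the sampling fragment above with the same illustrative names:

    rho   = 0.01;                               % elite percentile (Table 4)
    alpha = 0.75;                               % slowing factor (Table 4)

    [~, order] = sort(Y, 'ascend');             % minimization: best fitness first
    nElite = max(1, round(rho * size(X, 1)));
    Elite  = X(order(1:nElite), :);

    muElite    = mean(Elite, 1);                % Equation (12)
    sigmaElite = std(Elite, 1, 1);              % Equation (13): normalize by N_Elite

    mu    = alpha * muElite    + (1 - alpha) * mu;      % Equation (10)
    sigma = alpha * sigmaElite + (1 - alpha) * sigma;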

The procedure continues until a stopping criterion is met. CE is easily adapted to a spreadsheet, but is more practical within a mathematical programming package such as Matlab. The next section reviews the numerical solution of the optimization problem using CE.

Numerical Solution Using CE

The solution of the optimization problem by CE is conducted within Matlab. The algorithm used to write the program is given in the Appendix. The data used to initialize the program are the data used in [4] to derive the regression equation; these data are shown in Table 3. The program is initialized by establishing the parameters for the CE method: the number of samples to generate in each generation, N; the percentile defining the elite sample, ρ; the slowing factor, α; and the stopping criteria. These values are summarized in Table 4 and are the input to the program. The program provides as output the average value and standard deviation of each parameter at each iteration, the setup parameters, the CPU time required, and the number of iterations required to converge.

The solutions from each of five runs of the CE program using the parameters in Table 4 are shown in Table 5. In addition, Figures 1, 2 and 3 show, for each run, the optimum value, the iterations required to converge to the optimum solution, and the solver time required. From Figure 1, it is clear that the Cross-Entropy method successfully identifies the optimum solution for this problem. Figures 2 and 3 suggest that such a strict convergence criterion may not always be necessary, since it increases the solver time and the number of iterations required to converge; however, the penalty is insignificant compared with the increased precision in the result, when increased precision is desired. In this work, precision to four decimal places is desired, and CE demonstrates the ability to achieve it under the stricter convergence criterion. This suggests that if high precision is desired, a good approach to optimization may be to start with a strict convergence criterion.
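
Tying the fragments together, the whole method is a short loop around the two CE steps. The sketch below uses the stopping rule of this work, checked here on the parameter standard deviations only (the authors' program also includes the fitness column), with O denoting the assembly-line data matrix of Table 3 as loaded in the Appendix program, and with the illustrative names b0, b, lb, ub, N, rho and alpha defined as in the earlier fragments:

    tol = 0.001;                   % Trial 1; Trial 2 uses 0.00001 (Table 4)
    mu    = mean(O(:, 1:6), 1);    % initialize from the Table 3 data
    sigma = std(O(:, 1:6), 0, 1);

    for iter = 1:1000                                  % hard cap from the pseudocode
        X = mu + sigma .* randn(N, 6);                 % Step 1: sample
        keep = all(X >= lb & X <= ub, 2);              %   discard infeasible rows
        X = X(keep, :);
        if isempty(X)
            continue                                   %   resample if none feasible
        end
        Y = b0 + X * b';                               %   evaluate Equation (1)

        [~, order] = sort(Y, 'ascend');                % Step 2: select the elite
        Elite = X(order(1:max(1, round(rho * size(X, 1)))), :);
        mu    = alpha * mean(Elite, 1)   + (1 - alpha) * mu;    % smooth, Eq. (10)
        sigma = alpha * std(Elite, 1, 1) + (1 - alpha) * sigma;

        if max(sigma) < tol
            break      % converged: mu estimates X*, b0 + b*mu' estimates gamma*
        end
    end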

Table 3  The Initial Data from the Assembly Line (Various Units, dB)

    Sample    a1         a5         c6         c7         d3         d6         NVH
    1         67.9588    69.5134    71.5957    60.0000    46.0206    61.5836    81.03
    2         66.8485    70.9555    53.9794    56.9020    58.0618    60.8279    78.18
    3         67.6042    70.2377    53.9794    66.0206    49.5424    58.0618    86.26
    4         68.2995    70.6296    55.5630    62.9226    49.5424    63.5218    76.49
    5         67.2346    70.0212    66.4444    60.8279    52.0412    60.0000    82.29
    6         69.5424    70.4228    56.9020    60.8279    46.0206    59.0849    75.86
    7         68.6273    69.9662    46.0206    62.2789    62.9226    62.9226    80.95
    8         68.6273    69.1273    52.0412    59.0849    53.9794    60.0000    80.22
    9         68.2995    70.6040    40.0000    70.3703    49.5424    66.8485    83.64
    10        68.6273    70.3703    58.0618    77.5012    49.5424    63.5218    83.44
    11        68.9432    70.9309    60.0000    78.6900    46.0206    70.1030    82.09
    12        67.6042    70.5526    46.0206    76.1236    49.5424    71.3640    85.15
    13        68.2995    70.1030    53.9794    68.2995    52.0412    64.6090    80.36
    14        67.9588    70.3703    56.9020    71.8213    65.1055    61.5836    77.54
    15        68.9432    69.2180    58.0618    72.0412    63.5218    60.8279    82.33
    16        68.2995    68.8496    55.5630    67.6042    64.0824    60.0000    80.61
    17        68.6273    70.0212    59.0849    69.8272    64.0824    60.8279    75.29
    18        69.2480    69.1273    60.0000    53.9794    64.6090    66.8485    70.63
    19        67.6042    69.2480    61.5836    61.5836    46.0206    65.1055    82.28
    20        67.6042    69.9109    60.0000    58.0618    46.0206    62.9226    80.53
    21        68.2995    69.6575    62.9226    46.0206    59.0849    56.9020    72.87

    Avg       68.2429    69.9922    56.6050    64.7995    54.1594    62.7365    79.9067
    St Dev    0.6723     0.6226     6.9816     8.2071     7.3371     3.7297     3.9578
    Max       69.5424    70.9555    71.5957    78.6900    65.1055    71.3640    86.26
    Min       66.8485    68.8496    40.0000    46.0206    46.0206    56.9020    70.63

Table 4  Initialization Parameters for the CE Method

    Description                                  Parameter                            Value
    Number of samples in each generation         N                                    1000
    Elite percentile of the population           ρ                                    0.01
    Slowing factor                               α                                    0.75
    Stopping criterion (convergence), Trial 1    Maximum sample standard deviation    0.001
    Stopping criterion (convergence), Trial 2    Maximum sample standard deviation    0.00001

Table 5  The Average Values of the Parameters and the Solution of the Optimization Problem for Each Run

    Run    Criterion    Iterations Required    Solver Time    a1         a5         c6         c7         d3         d6         Y(X*)
                        to Converge            (sec)
    1      0.001        24                     17             69.5423    70.9554    71.5940    46.0230    65.1043    71.3627    58.1427
    2      0.001        24                     17             69.5423    70.9554    71.5942    46.0212    65.1043    71.3631    58.1420
    3      0.001        23                     17             69.5423    70.9553    71.5941    46.0216    65.1041    71.3623    58.1425
    4      0.001        23                     16             69.5423    70.9554    71.5943    46.0215    65.1041    71.3630    58.1423
    5      0.001        24                     17             69.5423    70.9554    71.5945    46.0213    65.1043    71.3627    58.1420
    1      0.00001      35                     25             69.5424    70.9555    71.5957    46.0206    65.1054    71.3640    58.1402
    2      0.00001      43                     31             69.3599    70.9555    71.5957    46.0206    65.1054    71.3640    58.6521
    3      0.00001      35                     25             69.5424    70.9555    71.5956    46.0206    65.1054    71.3640    58.1402
    4      0.00001      35                     25             69.5424    70.9555    71.5957    46.0206    65.1054    71.3640    58.1402
    5      0.00001      45                     32             69.5411    70.9555    71.5957    46.0206    65.1054    71.3639    58.1439

[Figure 1  The Optimum Value Identified by the Cross-Entropy Method, by Convergence Criterion and Run Number (the dashed line is the optimum value found deterministically, 58.1402 dB)]

[Figure 2  The Number of Iterations Required to Identify the Optimum Value Using the Cross-Entropy Method, by Convergence Criterion and Run Number]

[Figure 3  The Solver Time Required to Identify the Optimum Value Using the Cross-Entropy Method, by Convergence Criterion and Run Number]

Summary

The Cross-Entropy (CE) method is a relatively new optimization method. This paper illustrates its application to a very simple linear regression model, with the deterministic solution used as a known solution against which the CE results are compared. Table 6 shows the comparison: the agreement between the classical solution and the one found through CE is nearly exact. As discussed above, tightening the convergence criterion further would improve the precision. It remains, for future work, to confirm that axles built to the optimum conditions indeed produce improved vibration performance. Other papers by the authors illustrate solving the same optimization problem using a Genetic Algorithm [6] and Particle Swarm Optimization [5].

Table 6  A Comparison of the Deterministic Solution of the Optimization Problem to the Best Performance of CE

    Optimization     N       Iterations    Solver Time    a1         a5         c6         c7         d3         d6         Y(X*)
    Method                   to Solve      (sec)
    Deterministic    0       0             < 10           69.5424    70.9555    71.5957    46.0206    65.1055    71.3640    58.1402
    Cross-Entropy    1000    35            25             69.5424    70.9555    71.5956    46.0206    65.1054    71.3640    58.1402

Appendix

A detailed algorithm / pseudocode for the cross-entropy method

1. Initialize the program:
   a. Define the number of random samples, N, for X.
   b. Define the percentile, ρ, of the solutions of Y(X) that will comprise the elite sample.
   c. Define the slowing factor, α.
   d. Define the stopping criterion. This is chosen to be when the maximum standard deviation across all X and Y(X) is below 0.001 dB, or after 1,000 iterations.
2. Initialize μ_0j and σ_0j for j = 1, 2, ..., p:
   a. Import the raw data from the assembly line for each parameter.
   b. Calculate μ_0j and σ_0j from the data.
3. Generate N samples X_i from μ_(i-1)j and σ_(i-1)j.
   a. Evaluate X_i against the constraints and discard infeasible solutions.
4. Calculate Y_i(X_i) using Equation (1).
   a. Calculate μ_ij and σ_ij and compare to the stopping criterion (σ_ij < 0.001?).
   b. If true, γ* = Y_i(X_i) and X* = X_i.
5. Sort Y_i(X_i) and select the ρ-percentile elite solutions.
   a. The number of elite solutions is N_Elite.
6. Calculate the elite averages and standard deviations from Equations (12) and (13), the solution of the cross-entropy program of Equation (8).
7. Calculate v_i from Equation (10).
8. Increment the iteration number and repeat from Step 3 until the stopping criterion is met.

The Matlab code used for the cross-entropy method

% Open the data files
O = load('nvhdata.mat', '-ASCII');
constraint = load('const.mat', '-ASCII');

% Establish the number of variables (c) and the number of samples (r)
[r, c] = size(O);
Sample(r, c) = 0;            % pre-allocate; the array grows to n-by-8 later
SampleStDev(1, c) = 100;     % seed so the convergence loop starts
SolutionAverage(1, 1) = 0;
SolutionStDev(1, 1) = 0;

% Initialize the accumulators
err = 0;
x = 0;

% Calculate the averages of the assembly-line data
for j = 1:c
    for i = 1:r
        x = x + O(i, j);
    end

    Average(j) = x / r;
    x = 0;                   % re-initialize the accumulator
end
Average

% Calculate the standard deviations of the assembly-line data
for j = 1:c
    for i = 1:r
        err = err + (O(i, j) - Average(j))^2;
    end
    StDev(j) = sqrt(err / (r - 1));
    err = 0;
end
StDev

% Perform the optimization
n      = input('How many random samples shall we generate? ');
pelite = input('What percentile of feasible solutions shall we use as the Elite Sample? ');
stop   = input('What is the maximum Standard Deviation desired to achieve the optimum solution? This will apply to all variables. ');
slow   = input('What weight shall we apply to the new parameters (slow factor)? ');
plots  = input('Shall we create plots at the end? ', 's');

% Begin the iterations
numIterations = 0;
while max(SampleStDev) > stop
    numIterations = numIterations + 1;
    infeasible = 0;

    % Generate the new population, one variable (column) at a time
    for j = 1:6
        for i = 1:n
            Sample(i, j) = random('norm', Average(j), StDev(j));
        end
    end

    % Evaluate the fitness, Equation (1), for each sample
    for i = 1:n
        Sample(i, 7) = -2.8049*Sample(i,1) - 2.789*Sample(i,2) - 0.19745*Sample(i,3) ...
            + 0.36987*Sample(i,4) - 0.29785*Sample(i,5) - 0.2623*Sample(i,6) + 486.32;
    end

    % Check feasibility of the solutions
    % (row 1 of constraint holds the upper bounds, row 2 the lower bounds)
    for i = 1:n
        for j = 1:6
            if Sample(i, j) > constraint(2, j) && Sample(i, j) < constraint(1, j)
                Sample(i, 8) = 2;            % feasible so far

            else
                Sample(i, 8) = 1;            % infeasible: flag it and stop checking
                infeasible = infeasible + 1;
                break
            end
        end
    end

    % Create the array of feasible solutions.
    % Sort the sample first by column 8 (the feasibility flag, descending order)
    % and then by column 7 (the fitness value, ascending order)
    SortedSample = sortrows(Sample, [-8, 7]);
    NumFeasible = n - infeasible - 1;
    for i = 1:NumFeasible
        for j = 1:7
            Feasible(i, j) = SortedSample(i, j);
        end
    end

    % Calculate the averages of the feasible solutions
    for j = 1:7
        FeasibleAverage = 0;
        for i = 1:NumFeasible
            FeasibleAverage = FeasibleAverage + Feasible(i, j);
        end
        SampleAverage(j) = FeasibleAverage / NumFeasible;
        SolutionAverage(numIterations, j) = SampleAverage(j);
    end

    % Calculate the standard deviations of the feasible solutions
    for j = 1:7
        FeasibleStdError = 0;
        for i = 1:NumFeasible
            FeasibleStdError = FeasibleStdError + (Feasible(i, j) - SampleAverage(j))^2;
        end
        SampleStDev(j) = sqrt(FeasibleStdError / (NumFeasible - 1));
        SolutionStDev(numIterations, j) = SampleStDev(j);
    end

    % Calculate the elite averages
    nElite = round(NumFeasible * pelite);
    for j = 1:7
        nEliteAverage = 0;
        for i = 1:nElite
            nEliteAverage = nEliteAverage + Feasible(i, j);
        end
        EliteAverage(j) = nEliteAverage / nElite;
    end

    % Calculate the elite standard deviations
    for j = 1:7
        nEliteStdError = 0;
        for i = 1:nElite
            nEliteStdError = nEliteStdError + (Feasible(i, j) - EliteAverage(j))^2;
        end
        EliteStDev(j) = sqrt(nEliteStdError / (nElite - 1));
    end

    % Update the average and standard deviation for the next population
    NumFeasible = 0;
    Average = slow * EliteAverage + (1 - slow) * SampleAverage;
    StDev   = slow * EliteStDev   + (1 - slow) * SampleStDev;
end

% After the stopping criterion has been reached, display the results from
% each iteration along with the optimum
SolutionAverage
SolutionStDev
numIterations
SampleAverage
SampleStDev

% Write the results to an Excel file
xlswrite('solution.xls', SolutionAverage, 'Averages');
xlswrite('solution.xls', SolutionStDev, 'StDevs');

if plots == 'y'
    figure
    plot(SolutionAverage)
    figure
    plot(SolutionStDev)
end

% End of test. Ask to clear the memory.
reply = input('Do you want to clear everything? (y/n)[n] ', 's');
if reply == 'y'
    clear
    clc

elseif isempty(reply)
    reply = 'n';
end

Bibliography

1. Sun, Z., et al., NVH Robustness Design of Axle Systems, SAE Transactions, v. 112, pp. 1746-1754, 2003.
2. Steyer, G., et al., The Future of NVH Testing: An End-User's Perspective, SAE Technical Paper 2005-01-2270, 2005.
3. Meinhardt, G. and Sengupta, S., Correlation of Axle Build Parameters to End-of-Line NVH Test Performance, Part I: Preparing the Multivariate Data for Regression Analysis, SAE Technical Paper 2012-01-727, 2012.
4. Meinhardt, G. and Sengupta, S., Correlation of Axle Build Parameters to End-of-Line NVH Test Performance, Part II: Multivariate Regression Analysis, SAE Technical Paper 2012-01-728, 2012.
5. Meinhardt, G. and Sengupta, S., Optimization of Axle NVH Performance Using Particle Swarm Optimization, Proceedings of the ICAM 2014, May 28-30, 2014.
6. Meinhardt, G. and Sengupta, S., Optimization of Axle NVH Performance Using a Genetic Algorithm, Proceedings of the ICAM 2014, May 28-30, 2014.
7. Rubinstein, R., Optimization of Computer Simulation Models with Rare Events, European Journal of Operational Research, v. 99, pp. 89-112, 1997.
8. Kroese, D., et al., The Cross-Entropy Method for Continuous Multi-Extremal Optimization, Methodology and Computing in Applied Probability, v. 8, pp. 383-407, 2006.
9. De Boer, P., et al., A Tutorial on the Cross-Entropy Method, Annals of Operations Research, v. 134, pp. 19-67, 2005.
10. Rubinstein, R. and Kroese, D., The Cross-Entropy Method, Springer-Verlag, 2004.
11. Kothari, R. and Kroese, D., Optimal Generation Expansion Planning via the Cross-Entropy Method, Proceedings of the 2009 Winter Simulation Conference, IEEE, pp. 1482-1491, 2009.