Reliability-Based Topology Optimization with Analytic Sensitivities. Patrick Ryan Clark


Reliability-Based Topology Optimization with Analytic Sensitivities

Patrick Ryan Clark

Thesis submitted to the faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Science in Aerospace Engineering

Mayuresh J. Patil, Chair
Robert A. Canfield
Rakesh K. Kapania

June 19, 2017
Blacksburg, Virginia

Keywords: Reliability-Based Topology Optimization, First-Order Reliability Method, Sensitivity Analysis

Copyright 2017, Patrick R. Clark

Reliability-Based Topology Optimization with Analytic Sensitivities

Patrick Ryan Clark

ABSTRACT (Academic)

Reliability-Based Design Optimization (RBDO) approaches often use the First-Order Reliability Method (FORM) to efficiently obtain an estimate of the reliability of a system. This approach treats the reliability analysis as a nested optimization problem, where the objective is to compute the most probable point (MPP) by minimizing the distance between the failure surface and the origin of a normalized random space. Numeric gradient calculation of the solution of this nested problem requires an additional solution of the FORM problem for each design variable, an approach which quickly becomes computationally intractable for large-scale problems, including Reliability-Based Topology Optimization (RBTO). In this thesis, an alternative analytic approach to the analysis and sensitivity of nested optima, derived from the Lagrange Multiplier Theorem, is explored. This approach leads to a system of nonlinear equations for the MPP analysis for any given set of design variables. Taking the derivative of these equations with respect to a design variable gives a linear system of equations in terms of the implicit sensitivities of the MPP to the design variable, where the coefficients of the linear equations depend only on the current MPP. By solving this system, these sensitivities can be obtained without requiring additional solutions of the FORM problem. The proposed approach is demonstrated through several RBDO and RBTO problems.

Reliability-Based Topology Optimization with Analytic Sensitivities

Patrick Ryan Clark

ABSTRACT (General Audience)

It is a common practice when designing a system to apply safety factors to the critical failure load or event. These safety factors provide a buffer against failure due to random or unmodeled behavior which may lead the system to exceed its limits. However, these safety factors are not directly related to the likelihood of a failure event occurring. If the safety factors are poorly chosen, the system may fail unexpectedly or it may have a design which is too conservative. Reliability-Based Design Optimization (RBDO) is an alternative approach which directly considers the likelihood of failure by incorporating a reliability analysis step such as the First-Order Reliability Method (FORM). The FORM analysis itself requires the solution of an optimization problem, however, so implementing this approach into an RBDO routine creates a double-loop optimization structure. For large problems such as Reliability-Based Topology Optimization (RBTO), numeric sensitivity analysis then becomes computationally intractable. In this thesis, a general approach to the sensitivity analysis of nested functions is developed from the Lagrange Multiplier Theorem and then applied to several Reliability-Based Design Optimization problems, including topology optimization. The proposed approach is computationally efficient, requiring only a single solution of the FORM problem each design iteration.

I dedicate this work to Heather. You have always been there for me and I will always be there for you.

Acknowledgements

I would like to acknowledge the contributions of my advisor, Dr. Mayuresh Patil. This thesis would not be possible without his advice and guidance. More importantly, through my regular discussions with Dr. Patil I learned the power of asking questions and the joy of seeking a deeper understanding. I would also like to recognize the influence each member of my committee has had on my academic career. I had the pleasure of studying under Dr. Robert Canfield multiple times throughout my undergraduate and graduate career, and it was his courses in design optimization and reliability-based structural design which first drew me to this field. Similarly, studying under Dr. Rakesh Kapania has given me an appreciation of the beauty of structural analysis. Lastly, I would like to give special thanks to the Virginia Tech Center for the Enhancement of Engineering Diversity for supporting my graduate studies. Participating in your programs as an undergraduate was a life-changing experience, so I am grateful for the opportunity to give back. It truly has been a pleasure to serve on your team.

Table of Contents

1. Introduction
    1.1 Background
    1.2 Overview of Research
2. Literature Review
    2.1 Deterministic Design Using Factors of Safety
    2.2 Reliability Analysis
        2.2.1 Key Aspects of Probability Theory
        2.2.2 Monte-Carlo Simulation
        2.2.3 Mean Value First-Order Second Moment Method (MVFOSM)
        2.2.4 First-Order Reliability Method (FORM)
        Performance Metric Approach (PMA)
    Reliability-Based Design Optimization (RBDO)
        Double-Loop Approaches
        Single-Loop and Decoupling Approaches
    Sensitivity Analysis
        Finite Difference Method
        Complex Step Method
        Direct and Adjoint Methods
        First-Order Necessary Conditions
        Sensitivity Analysis for RBDO Applications
    Topology Optimization
        Solid Isotropic Material with Penalization (SIMP)
        Level-Set Methods
        Filtering
    Reliability-Based Topology Optimization (RBTO)
3. Sensitivity of Nested Optima
    General Derivation
    Application to the First-Order Reliability Method

4. Results and Discussion
    Overview of the Analyses Performed
    RBDO of a 3-Bar Truss Structure
        Problem Statement
        Sensitivity Analysis Demonstration and Validation
        Deterministic and Reliability-Based Design Optimization Results
    Topology Optimization of a Benchmark Problem
        Problem Statement
        Sensitivity Analysis
        Solution versus Reliability Index
    Multiple Random Loads
        Problem Statement
        RBTO Solutions
        Comparison to Deterministic Solutions for Various Factors of Safety
        Comparison of Analytic and Finite Difference Computation Times
    RBTO of a Deflection-Limited Cantilever Structure
        Problem Statement
        Sensitivity Analysis
        RBTO Solution
5. Conclusion
References
Appendix A: Compliance Derivatives
    A.1 Derivative with Respect to any Pseudo-Density
    A.2 Derivative with Respect to any Random Load
    A.3 Second Derivative for any Pair of Random Loads
    A.4 Mixed Derivative for any Density and Load Pair
Appendix B: Displacement Derivatives
    B.1 Derivative with Respect to any Pseudo-Density
    B.2 Derivative with Respect to any Random Load
    B.3 Second Derivative for any Pair of Random Loads
    B.4 Mixed Derivative for any Density and Load Pair

List of Figures

Figure 1: Axial bar with cross-sectional area and applied load
Figure 2: Rotationally symmetric joint probability density function
Figure 3: Relationship between the probability of failure and the MVFOSM reliability index
Figure 4: Projection of the failure surface into a random space with two dimensions
Figure 5: Rotation of the linearized failure surface
Figure 6: Typical RBDO procedure
Figure 7: Topology optimization solution for a Mitchell Beam
Figure 8: Cantilever beam problem exhibiting checkerboarding
Figure 9: RBTO benchmark problem from Rozvany and Maute [1]
Figure 10: Probabilistic 3-bar truss problem
Figure 11: MCS distribution for the 2.0 horizontal displacement constraint
Figure 12: Analytic solution for the RBTO benchmark problem [1]
Figure 13: Discretization of the RBDO benchmark problem
Figure 14: 4th iteration analytic sensitivities on a lognormal scale
Figure 15: 4th iteration finite difference sensitivities for 10^-7 on a lognormal scale
Figure 16: Absolute difference between the analytic and finite difference sensitivities
Figure 17: RBTO benchmark topology after 1,000 iterations with
Figure 18: Convergence of the weight of the RBTO benchmark topology with
Figure 19: Convergence of for the RBTO benchmark topology with
Figure 20: Norm of the residual of the KKT Equations for the RBTO benchmark with
Figure 21: Oscillations of the MPP for the RBTO benchmark with
Figure 22: First 10 iterations of the asymmetric RBTO solution
Figure 23: Topology obtained for 3 after reformulation to consider multiple MPPs
Figure 24: Topology found using the modified approach starting at the asymmetric solution
Figure 25: Solutions to the RBTO benchmark problem for different reliability indices
Figure 26: Estimation of for the RBTO benchmark problem with
Figure 27: Symmetry of the compliance constraint for the RBTO benchmark with
Figure 28: Probability density function for the compliance constraint obtained using MCS

Figure 29: Problem formulation for RBTO with two random loads
Figure 30: RBTO solution for two random loads with 1 10 and
Figure 31: RBTO solution for two random loads with 1 20 and
Figure 32: Deterministic topology for the multiple-load problem with
Figure 33: Deterministic topology for the multiple-load problem with
Figure 34: Deterministic topology for the multiple-load problem with
Figure 35: Computation time for one design iteration versus number of pseudo-densities
Figure 36: Problem formulation for the deflection-limited RBTO problem
Figure 37: Optimum topology for the deflection-limited cantilever plate problem with
Figure 38: Optimum topology for the deflection-limited cantilever plate problem with

List of Tables

Table 1: Summary of parameters for the sensitivity validation study
Table 2: FORM solution for the reliability of the stress constraint for bar
Table 3: Comparison of MPP sensitivities found using three sensitivity analysis approaches
Table 4: Summary of parameters for the deterministic optimization versus RBDO study
Table 5: Comparison of 3-bar truss designs using deterministic optimization and RBDO
Table 6: Comparison of constraint behavior for similar deterministic and RBDO designs
Table 7: Summary of unit-less parameters for the RBTO benchmark problem
Table 8: Comparison of finite difference and analytic sensitivities for the RBTO benchmark
Table 9: RBTO benchmark reliability indices versus approximated truss angle
Table 10: Summary of unit-less parameters for the multiple random loads RBTO problem
Table 11: Comparison of deterministic and reliability-based topologies for multiple loads
Table 12: Analytic and finite difference computation times versus mesh density
Table 13: Summary of unit-less parameters for RBTO of a deflection-limited structure

Nomenclature

= Linear density filter weighting matrix
= Compliance
= Finite element displacement vector
= Elastic modulus
= Objective function, nested objective function
= Generic probability density function
= Point force
= Generic cumulative density function
= Factor of safety
= Generic function, nested inequality constraint
= Generic inequality constraint
= Finite difference step size, nested equality constraint
= Generic equality constraint
= Hessian matrix
= Indices for vectors and matrices
= Identity matrix
= Finite element stiffness matrix
= Lagrangian function
= Locator matrix
= SIMP penalization exponent
= Probability of failure
= Filter radius
= Finite element load vector
= Standard normal random variable
= Most probable point
= Lagrange multiplier
= Weighting factor
= Generic design variable

= Generic local minima
= Generic random parameter
= Generic nested variable
= Generic nested minima
= Generic nested state variable
= Vector which isolates a specified displacement
= Reliability index
= Target reliability index
= Lagrange multiplier, adjoint vector
= Mean
= Poisson's ratio
= Pseudo-density
= Standard deviation, stress
= Standard normal probability density function
Φ = Standard normal cumulative density function

1. Introduction

1.1 Background

In design optimization, a system or structure is reduced to a set of equations in terms of key but unknown characteristics, such as the width of a beam, the stiffness of a spring, or the resistance of an electrical component. After identifying performance metrics to be maximized or minimized as well as constraints on the design, a numerical procedure is then used to find the best feasible solution. Traditionally, this requires the assumption that all aspects of the problem are deterministic. That is, the behavior of every parameter or characteristic is known exactly and does not vary regardless of how many times the system is produced or observed. In reality, there are few systems which behave in this manner. Rather, most real systems have some degree of randomness in their configuration or response. For example, the material properties of a steel beam can vary locally due to changes in the microstructure, or vary on average between different production runs due to inexact control of the alloying agents. Similarly, the geometry of a machined part is commonly expected to fall within a range of values specified by a set of tolerances as a result of imperfect process control. The common practice for addressing these uncertainties while retaining the deterministic analysis is to apply a factor of safety to the limit states of the system or to make conservative assumptions. When chosen correctly, the result is a buffer against failure which covers all reasonably likely scenarios. The question facing the engineer then becomes: what factor of safety is appropriate? While the factor of safety approach is simple to implement, it is only an indirect measure of the actual probability of failure of the system. Choosing too low a factor of safety can result in a design which fails more frequently than desired, while choosing one too high results in a design which is inefficient. Typically, experience and extensive testing are required to determine appropriate safety factors; however, this can be costly and time consuming.

Furthermore, even with the correct choice of factors of safety, a design optimized under these assumptions is not necessarily the best reliable solution. An alternative approach is to formulate the optimization process in terms of the probabilities of failure, which has led to the fields of Reliability-Based Design Optimization (RBDO) and Robust Design Optimization (RDO). These methods directly consider the statistics of the system, with RBDO introducing probability as a constraint and RDO using statistical measures such as variance as a target for minimization or maximization. These approaches are appealing since the uncertainties present in the system can be directly quantified; however, this is also the challenge of these methods: how can existing optimization tools be modified to consider parameters which no longer hold a fixed value, but instead may hold any one of infinitely many values? Historically, a number of approaches have been used to perform this probabilistic analysis. These include simulation methods such as Monte Carlo simulation as well as approximation methods, including the First-Order Reliability Method. These methods typically require greater computational effort than deterministic analysis since each random parameter adds another dimension to the problem. The First-Order Reliability Method is particularly noteworthy, since this approach reformulates the probabilistic analysis into a deterministic optimization problem. However, when implemented in an RBDO or RDO problem, the result is a nested structure where the FORM optimization problem has to be solved at every iteration of the design optimization problem. While this is computationally expensive by itself, the cost is compounded during the sensitivity analysis step, particularly if finite difference derivatives are used. In this case, the FORM problem needs to be solved once each iteration to get a baseline and then re-solved for each design variable.
Thus, the computational cost grows rapidly with the number of design variables. Unfortunately, this limits the use of RBDO techniques for topology optimization, since these problems typically have thousands or more design variables. If an efficient sensitivity analysis procedure can be derived, then Reliability-Based Topology Optimization (RBTO) has the potential to combine the best of both of these methods. By itself, topology optimization is a powerful tool since the distribution of the material or another property is treated as unknown, thereby removing as many assumptions as possible from the analysis. This expands the design space, potentially leading to novel, efficient designs. With an efficient sensitivity analysis procedure, these designs can also be constrained to satisfy a target reliability level, producing designs which are both efficient and reliable.

1.2 Overview of Research

The aim of this research is to develop a general procedure for calculating the gradients of the solution of a nested optimization problem with respect to the design variables of the outer problem. Ideally, such a procedure would not require any further information than what is already computed as part of the initial solution of the nested optimization problem, particularly if the nested optimum requires the solution of a finite element or other large-scale problem. Nor should the procedure require any additional solutions of the nested optimization problem. After developing a generalized procedure, its implementation in an RBDO problem using the First-Order Reliability Method can be explored, including the efficient solution of Reliability-Based Topology Optimization problems. The remainder of this thesis is organized as follows. First, a survey of existing research in the field of Reliability-Based Topology Optimization is presented. Particular focus is given to the first-order reliability method, the solid isotropic material with penalization formulation for topology optimization, and existing approaches to the sensitivity analysis used to solve RBTO problems. An approach for the sensitivity analysis derived from the Lagrange Multiplier Theorem is then presented. Several example problems are then formulated and solved. These include a probabilistic variant of the classic 3-bar truss problem [2], the RBTO benchmark derived by Rozvany and Maute [1], and a probabilistic nodal displacement problem representative of topology optimization scenarios encountered in industry.

2. Literature Review

A common assumption made when designing or analyzing a system is that the configuration and behavior of the system can be expressed exactly by a set of defined parameters. This implies that there is perfect information about the system and that it will perform identically regardless of how many times it is produced or used. This assumption is at the core of most engineering analyses, ranging from simple hand calculations to large finite element problems. Few, if any, real systems are entirely deterministic, however. Instead, there is usually some degree of randomness inherent in the system. This randomness is potentially derived from many different sources, which can include manufacturing imperfections, varying environmental conditions, or complex and chaotic interactions with other systems. The result is that there is a degree of uncertainty associated with any analysis using this assumption. When derived from random sources, the uncertainty is referred to as aleatory and is typically irreducible. This differs from epistemic uncertainty, which can be reduced by having better information.

2.1 Deterministic Design Using Factors of Safety

The simplest procedure for addressing uncertainties is to apply a factor of safety to the deterministic limiting conditions during the design process. The factor of safety is simply the ratio of the actual failure state to a more restrictive design failure state:

    FS = (actual failure state) / (design failure state)    (1)

The result is a buffer between the design criteria and the actual limit state. Ideally, this buffer is large enough to include all reasonably likely realizations of the system. This buffer also accounts for uncertainty which may be present in the actual limit state, such as how the yield

stress of a material is commonly reported as a statistical quantity with the expectation that samples may deviate by some amount. It is important to note that the factor of safety is only indirectly related to the random behavior of the system. While a factor of safety of 1.25 may be sufficient for one application, there is no guarantee that it will be sufficient for another. As an example of this, consider the axial bar shown below in Figure 1.

Figure 1: Axial bar with cross-sectional area and applied load

The stress σ in this axial bar is commonly known to be the ratio of the applied load P to the cross-sectional area A of the bar:

    σ = P / A    (2)

A constraint on the stress can then be introduced to size the area of the cross-section. For example, it is common to require that the stress is less than a limiting value σ_limit, such as the yield stress or the critical buckling stress:

    σ ≤ σ_limit    (3)

A factor of safety FS can then be applied to the design to account for uncertainties in the applied load, limit stress, and the geometry of the bar:

    σ ≤ σ_limit / FS    (4)
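Rearranged for the cross-sectional area, Equation (4) gives A ≥ FS · P / σ_limit. A minimal sketch of this sizing rule follows; the numerical values are illustrative assumptions, not values from the thesis:

```python
def required_area(design_load, limit_stress, factor_of_safety):
    # Smallest cross-sectional area satisfying Equation (4),
    # sigma = P / A <= sigma_limit / FS, rearranged for A.
    return factor_of_safety * design_load / limit_stress

# Hypothetical values: 10 kN design load, 250 MPa limit stress, FS = 1.5.
area = required_area(10e3, 250e6, 1.5)
print(area)  # 6e-05 m^2, i.e. 60 mm^2
```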

Equation (4) can then be used to choose an appropriate cross-sectional area given an assumed value for the load. In operation, the bar will not fail as long as the actual load falls within the buffer created by the factor of safety:

    P_actual ≤ FS · P_design    (5)

It is important to note that while Equation (5) states a failure condition for the actual load, it does not provide any information about the reliability of the system, defined as the frequency with which the actual load will exceed this limit. If the actual load varies significantly from the design load, then there is the potential that failure will occur far more frequently than desired, even with the factor of safety. Similarly, the factor of safety may be providing too much of a buffer if the actual load does not frequently deviate from the design load, indicating that the cross-sectional area could be reduced further while still satisfying the reliability requirements. Thus, the appropriate choice of a factor of safety is essential. For a simple design problem with few sources of randomness it may be easy to select a factor of safety which balances the frequency of failure with the optimality of the design; for complex systems, however, the random behavior may be difficult to estimate without extensive testing.

2.2 Reliability Analysis

While deterministic analysis seeks to answer the question of if or when failure will occur, reliability analysis instead examines the frequency with which failure occurs. This requires an understanding of the probabilistic nature of the system, including which parameters behave randomly and how this behavior can be modeled. By using probability principles, reliability analysis approaches are able to express the random behavior as a single quantity which is compatible with existing deterministic design tools. In some cases this can be achieved by reformulating the problem to include the probability distributions; however, this approach is problem-specific.
More generally, the reliability analysis can be performed using either simulation approaches such as Monte Carlo Simulation or approximation approaches such as the First-Order Reliability Method. It is important to note that the reliability analysis methods discussed in this section are only suitable for addressing aleatory uncertainties. Other methods must be used for analyzing epistemic uncertainties.

2.2.1 Key Aspects of Probability Theory

Properties exhibiting random behavior are typically expressed using probability distributions. These distributions provide a mathematical relationship between a given value of a random parameter and the frequency with which it is realized. Every continuous probability distribution can be expressed in one of two forms: a probability density function (PDF) or a cumulative density function (CDF). Probability density is the likelihood that a random parameter will fall between two values as the distance between the values approaches 0. When this is expressed as a continuous function, the result is the probability density function. Thus, given a probability density function f_X(x) for the random parameter X, the probability of X falling between two values a and b is the area under the curve between these values:

    P(a ≤ X ≤ b) = ∫_a^b f_X(x) dx    (6)

The cumulative density is the probability that a value less than x will be realized. Thus, the cumulative density is the area under a continuous probability density function from negative infinity to x:

    F_X(x) = ∫_−∞^x f_X(t) dt    (7)

If this is evaluated for every x, the result is the cumulative density function. Thus, the probability that a random parameter falls between a and b is simply the difference of the cumulative density function evaluated at a and b:

    P(a ≤ X ≤ b) = F_X(b) − F_X(a)    (8)

Among the most commonly used distributions is the normal or Gaussian distribution. This distribution provides a good representation of many systems since the summation of a large number of random variables tends towards a normal distribution regardless of their underlying distributions, a property commonly referred to as the Central Limit Theorem. The normal distribution is symmetric about its mean and can be completely described by its first two

statistical moments: its mean, μ, and the square root of its variance, the standard deviation σ. Given these two properties, the probability density function for a normal distribution is given by:

    f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))    (9)

A related property of normal distributions is that a linear sum of normally distributed parameters is itself exactly normally distributed. For example, given a linear function Y of normal random parameters X₁, X₂, ..., Xₙ:

    Y = a₀ + a₁X₁ + a₂X₂ + ⋯ + aₙXₙ    (10)

the mean of Y is given by Equation (11) and the standard deviation is given by Equation (12):

    μ_Y = a₀ + Σᵢ aᵢ μᵢ    (11)

    σ_Y = √( Σᵢ aᵢ² σᵢ² )    (12)

A normal distribution with zero mean and unit standard deviation is commonly referred to as a standard normal distribution, with a probability density function denoted φ and a cumulative density function denoted Φ. When multiple random parameters are present, the likelihood of a specific combination being realized is given by a joint probability density function. This density function has the convenient property of being rotationally symmetric when all random parameters have uncorrelated standard normal distributions. That is, if a line is drawn from the origin in a space defined by orthogonal axes for each random parameter, the probability density along that line is only a function of the distance from the origin and will not vary regardless of how the line is rotated. Furthermore, if the distance along the line is parameterized by a single variable u, then the probability density at u is given by:

    f(u) = φ(u) [φ(0)]^(n−1)    (13)

Figure 2: Rotationally symmetric joint probability density function

where φ is the probability density function of a standard normal distribution and n is the number of random parameters. This implies that the probability density decreases as the distance from the origin increases. Thus, all points having a given probability density can be represented by a circle, sphere, or hypersphere in this space, depending on the number of random parameters. An example of this property for 2 random parameters is shown in Figure 2. Several reliability analysis approaches take advantage of this symmetry, as discussed in later sections.

2.2.2 Monte-Carlo Simulation

Probability of failure is the idea that if a system is observed an infinite number of times, a specific percentage of these realizations would not satisfy a given performance metric, such as a beam deflecting in excess of a given amount in 0.05% of all realizations. While it is impossible to observe a system an infinite number of times, it is reasonable to assume that a good estimate of the probability of failure can be obtained after sufficiently many observations. This is the basis of Monte-Carlo simulation and its derivatives: the probability distributions of each random parameter are used to generate a finite number of random system configurations which are then analyzed. The probability of failure is then estimated by the fraction of realizations which failed.
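This estimator can be sketched in a few lines of Python. The limit state and load distribution below are illustrative assumptions (not from the thesis), using the convention that g > 0 indicates failure:

```python
import random

def mcs_probability_of_failure(g, sample, n_samples=100_000, seed=0):
    # Estimate P_f as the fraction of random realizations with g > 0.
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples) if g(sample(rng)) > 0.0)
    return failures / n_samples

# Illustrative limit state: a load ~ N(100, 10) must stay below 130.
g = lambda load: load - 130.0              # g > 0 means the limit is exceeded
sample = lambda rng: rng.gauss(100.0, 10.0)
p_f = mcs_probability_of_failure(g, sample)
# The exact answer is 1 - Phi(3), about 1.35e-3; the estimate scatters
# around this value and changes with the seed and the number of samples.
print(p_f)
```

Note how the estimate is itself random: rerunning with a different seed gives a slightly different P_f, which is one reason such simulations are a poor fit for loops that need repeatable values or sensitivities.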

    P_f ≈ N_failed / N_total    (14)

The advantage of this approach is that the estimate of the probability of failure will converge to the actual value as the number of samples approaches infinity. However, this is also the drawback of these methods: many samples may be required to obtain a good estimate, especially as the number of random parameters increases or if the probability of failure is low. Additionally, Monte-Carlo analysis is generally not repeatable since a new set of samples may produce a different probability of failure as a result of the random nature of the sampling. Altogether, these limitations suggest that while Monte-Carlo simulations are a powerful tool for a standalone analysis, they should not be used for applications which require repeated analyses or the sensitivity of the probability of failure to small changes in the system.

2.2.3 Mean Value First-Order Second Moment Method (MVFOSM)

Simulation methods can be computationally expensive, particularly if the analysis of the system requires the solution of a finite element model. This has motivated the development of approximation methods which trade the accuracy of the probability estimate for improvements in the computation requirements. Among the simplest of these approaches is the mean value, first-order second moment method. This approach approximates a function g of random variables using the Taylor series expansion about the mean value solution of the function, given in Equation (15). In this thesis, the convention that positive values of g indicate that the constraint has been violated is used.

    g(x) ≈ g(μ) + Σᵢ (∂g/∂xᵢ)|_μ (xᵢ − μᵢ)    (15)

Since the above equation has the same form as Equation (10), the linear approximation of the random function has a normal distribution. Therefore, the mean and standard deviation are given by Equations (11) and (12), where the constant term corresponds to the function evaluated at the mean value point and the coefficients aᵢ are the components of the gradient of g evaluated at the mean value point.
Using the linear expansion, the location of the failure point relative to the mean can then be estimated. It is common to express this as the reliability index, β, which is defined as the number of standard deviations between the mean value of g and 0. Assuming that the mean value point is feasible, the reliability index is given by Equation (16).

    β = −μ_g / σ_g    (16)

(The negative sign follows from the convention that g > 0 indicates failure, so a feasible mean value point has μ_g < 0 and β is positive.) The reliability index and the probability of failure are related since the probability distribution of the linear approximation is known to be a normal distribution. This relationship is depicted in Figure 3. Mathematically, this relationship is given by:

    P_f = 1 − Φ(β)    (17)

Figure 3: Relationship between the probability of failure and the MVFOSM reliability index

It is important to note that the equivalent probability of failure will differ from the actual probability of failure as a result of the linearization of the system. Additionally, the actual probability distribution may not be exactly normal, especially if g is nonlinear. More concerning, however, is the fact that the reliability index computed by MVFOSM is not invariant with respect to the problem formulation, as shown by Choi et al. in Example 4.2 of [2]. That is, a different value of the reliability index may be computed for different algebraically equivalent expressions of g. Because of this, the MVFOSM approach is typically not used by itself for reliability analysis. However, this approach is still commonly used to provide an initial estimate for more complex approaches, such as the First-Order Reliability Method.
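For a limit state that is linear in normally distributed parameters, the MVFOSM estimate reduces to a few lines. The sketch below (an illustration, not thesis code) combines Equations (11) and (12) for the moments of the linearized limit state with Equations (16) and (17), again using the convention that g > 0 indicates failure:

```python
import math

def mvfosm(g_mean, grads, sigmas):
    # g_mean: g evaluated at the mean point (negative when the mean is feasible).
    # grads: dg/dx_i at the mean point; sigmas: standard deviations of the x_i.
    sigma_g = math.sqrt(sum((a * s) ** 2 for a, s in zip(grads, sigmas)))
    beta = -g_mean / sigma_g                     # Equation (16)
    p_f = 0.5 * math.erfc(beta / math.sqrt(2))   # Equation (17): 1 - Phi(beta)
    return beta, p_f

# Illustrative linear limit state: g = load - 130, with load ~ N(100, 10).
beta, p_f = mvfosm(g_mean=100.0 - 130.0, grads=[1.0], sigmas=[10.0])
print(beta, p_f)  # beta = 3.0, p_f ~= 1.35e-3
```

Because this g is linear in a single normal variable, the MVFOSM result is exact here; the approximation error appears only for nonlinear limit states.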

2.2.4 First-Order Reliability Method (FORM)

The First-Order Reliability Method originally proposed by Hasofer and Lind [3] is an improvement to the MVFOSM approach which makes the reliability index invariant at the cost of increased computational effort. As discussed in Section 2.2.1, an uncorrelated standard multivariate normal distribution is rotationally symmetric about the origin of the random parameter space. Furthermore, the probability density decreases predictably as the distance from the origin increases. This suggests that the distance from the origin in this space can be a good proxy for the random behavior of the system, similar to the MVFOSM reliability index. Since many reliability problems do not have this joint probability density function, the first step of FORM is to map each random parameter onto a standard normal distribution. For a given point x on a normal probability distribution with mean μ and standard deviation σ, the equivalent point u on a standard normal distribution is given by:

    u = (x − μ) / σ    (18)

Next, the failure surface of the constraint can be mapped into this new space. The point on the failure surface which is closest to the origin will therefore have the highest joint probability density. This point is commonly referred to as the Most Probable Point (MPP) and is designated u*. An equivalent to the MVFOSM reliability index can then be defined as the number of standard deviations between the origin and the MPP. Since the normalized space has unit standard deviation, this is equal to the distance between the origin and the MPP:

    β = ‖u*‖    (19)

The relationship between the standard normalized space, failure surface, MPP, and reliability index for a two-dimensional problem is visualized in Figure 4.
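Equations (18) and (19) amount to a change of variables followed by a Euclidean norm. A minimal sketch (the numbers and the MPP coordinates below are hypothetical placeholders, not results from the thesis):

```python
import math

def to_standard_normal(x, mu, sigma):
    # Equation (18): map a normal random variable into standard normal space.
    return (x - mu) / sigma

def reliability_index(u_mpp):
    # Equation (19): beta is the distance from the origin of the
    # normalized space to the MPP.
    return math.sqrt(sum(u * u for u in u_mpp))

print(to_standard_normal(120.0, mu=100.0, sigma=10.0))  # 2.0
print(reliability_index([1.8, 2.4]))                    # ~3.0
```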

Figure 4: Projection of the failure surface into a random space with two dimensions

The fact that the MPP is a local minimum with respect to the distance from the origin implies that the constraint surface is orthogonal to the radial direction at the MPP. Therefore, linearization about the MPP will result in a line, plane, or hyperplane which is also orthogonal to the radial direction. By considering the rotational symmetry of the joint probability density function, a relationship between the probability of failure and the reliability index can then be developed. The probability of failure can be approximated by integrating the region beyond the linearized failure surface, shown in Figure 5. While this calculation may be difficult when the radial direction is at an angle relative to the axes of the normalized space, the rotational symmetry of the probability density implies that the solution will be identical to that of a hyperplane orthogonal to one of the axes and located at the same distance from the origin. This rotation is also depicted in Figure 5. Since the equivalent failure surface is parallel to every other random axis, the integration of the uncorrelated multivariate standard normal distribution with respect to each of these random parameters is exactly 1. All that remains is the integration along the perpendicular axis, which is simply the integration of the standard normal probability density function. Thus, the relationship between the FORM reliability index and the probability of failure is given by:

p_f = 1 − Φ(β)    (20)

Figure 5: Rotation of the linearized failure surface (the linearized failure region is rotated into an equivalent failure region orthogonal to an axis)

The challenge is that the MPP is typically not known beforehand; however, it can be found by minimizing the reliability index with u constrained to lie on the failure surface:

min_u  ‖u‖    s.t.  g(u) = 0    (21)

Thus, the First-Order Reliability Method is able to obtain an estimate of the random behavior of the system by solving a deterministic optimization problem. A commonly used approach for solving this minimization problem is the HL iteration procedure originally developed by Hasofer and Lind [3]. This method is similar to the MVFOSM approach, except sensitivity factors are used to update the linear expansion point until it converges to the MPP. However, it has been observed that this procedure is not robust and will sometimes fail to converge to an optimum [4]. In general, any optimization procedure can be used to solve the FORM problem. Another issue related to the optimization procedure is the choice of the initial point. While it is intuitive to use the mean value point as a starting point for the iteration procedure, the gradient at this point with respect to the standard normal random parameters, given by Equation (22), will be undefined since the reliability index of the mean value point is 0 by definition:

∂β/∂u = u / ‖u‖    (22)

To avoid this issue, it is common to use the MVFOSM approach discussed earlier to obtain an initial non-zero estimate of the MPP. Additionally, it is important to note that FORM is only applicable to problems which exclusively have normally distributed variables; however, extensions exist for non-normal distributions, including the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) method [5]. Lastly, the transformation given by Equation (18) indicates that there are actually two sets of variables used when solving the optimization problem: the optimization variables u and the analysis variables x. While it is possible to formulate the optimization process exclusively in terms of one or the other, this may not be convenient. If the transformation is used instead, the chain rule relationships given by Equations (23) and (24) can be used to obtain equivalent gradients:

∂g/∂u_i = σ_i (∂g/∂x_i)    (23)

∂g/∂x_i = (1/σ_i) (∂g/∂u_i)    (24)

Overall, FORM is a commonly used procedure for reliability analysis since it captures the random behavior of the system using a deterministic process. This approach is compatible with existing analysis tools since the process of estimating the MPP replaces each probability distribution with a single value. Additionally, the computational costs of the procedure scale well with the number of random variables in comparison to Monte Carlo simulation. Lastly, if the failure surface is linear or nearly linear, the estimate of the probability of failure will be accurate.
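The HL iteration described above can be sketched in a few lines. The example below is an illustrative aside, not code from the thesis: the R − S limit state, its statistics, and all function names are assumptions. For a linear limit state with independent normal capacity R and load S, the iteration recovers the exact result β = (μ_R − μ_S)/√(σ_R² + σ_S²):

```python
import math

# Illustrative capacity/load statistics (assumed, not from the thesis).
mu_R, sig_R = 10.0, 2.0
mu_S, sig_S = 5.0, 1.0

def g(u):
    # Limit state g = R - S mapped to standard normal space via x = mu + sigma*u (Eq. 18).
    return (mu_R + sig_R * u[0]) - (mu_S + sig_S * u[1])

def grad_g(u):
    # Analytic gradient in u-space (chain rule, Eqs. 23-24); constant here.
    return [sig_R, -sig_S]

def hl_iteration(g, grad_g, n, tol=1e-10, max_iter=100):
    """Basic HL update: u <- ((grad.u - g(u)) / |grad|^2) * grad."""
    u = [0.0] * n
    for _ in range(max_iter):
        gr = grad_g(u)
        norm2 = sum(c * c for c in gr)
        coef = (sum(c * ui for c, ui in zip(gr, u)) - g(u)) / norm2
        u_new = [coef * c for c in gr]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u

u_star = hl_iteration(g, grad_g, 2)
beta = math.sqrt(sum(c * c for c in u_star))
print(beta)  # ~2.2360679, i.e. 5/sqrt(5)
```

For this linear limit state the update converges in one step; for nonlinear limit states the same loop iterates, and (as noted above) may fail to converge in difficult cases.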

2.2.5 Performance Metric Approach (PMA)

An alternative to the First-Order Reliability Method is the Performance Metric Approach [6] developed by Tu et al. Instead of finding the MPP, the PMA approach seeks to find the worst-performing point for a specified reliability requirement β_t. This point can then be compared to a performance metric, such as an allowable stress or a maximum deflection. If the worst-case point satisfies this metric, the system can be considered reliable. This approach is derived using the same principles as the First-Order Reliability Method; however, the optimization problem is inverted, as shown in Equation (25):

max_u  g(u)    s.t.  ‖u‖ − β_t = 0    (25)

Note that the maximization problem is a result of the sign convention used, where positive values of the constraint are considered to be infeasible. While this method is not the focus of this thesis, it has been regularly used to solve Reliability-Based Topology Optimization problems, as discussed in Section 2.6. This approach also has many of the same shortcomings as the First-Order Reliability Method.

2.3 Reliability-Based Design Optimization (RBDO)

2.3.1 Double-Loop Approaches

The First-Order Reliability Method can easily be implemented in a design optimization scheme; however, the result is a double-loop optimization problem. That is, for every iteration of the design optimization routine, the FORM optimization problem needs to be solved while treating the current design variables as fixed parameters. A general RBDO problem can be expressed as follows:

min_d  f(d)    s.t.  β_t − β(d) ≤ 0,    R(d) = 0    (26)

where d are the deterministic design variables, β_t − β(d) ≤ 0 is the reliability constraint, and R(d) = 0 is a set of state equations such as the finite element equations. Additional deterministic constraints can be introduced to the problem as needed. The reliability constraint requires the computation of the reliability index at each iteration of the design problem by solving the FORM problem:

min_u  ‖u‖    s.t.  g(u, d) = 0,    r(u, d) = 0    (27)

where g(u, d) is the non-deterministic limit state and r(u, d) is a set of state equations which may need to be satisfied while evaluating the constraint. A typical scheme is depicted below in Figure 6. While the Performance Metric Approach discussed in Section 2.2.5 is not the focus of this research, the double-loop structure of the problem would be identical. The design problem is commonly solved using a gradient-based optimizer. While these approaches are efficient, they typically require good estimates of the sensitivities of the constraints to quickly converge. Since the reliability constraint is itself an optimization problem, this indicates that the sensitivities of the solution of the optimization problem are required. For FORM, these sensitivities are given by Equation (28):

dβ/dd_i = d‖u*(d)‖/dd_i    (28)

Depending on the sensitivity analysis method used, additional solutions of the FORM problem may be required each iteration. This highlights the primary drawback of the double-loop approach: while it is simple to implement, the repeated solutions of the nested optimization problem are computationally expensive.

Figure 6: Typical RBDO procedure (the outer RBDO loop initializes the problem, evaluates the reliability constraint by solving the nested FORM problem, performs sensitivity analysis of the FORM solution, updates the design variables, and checks convergence; the nested FORM loop iteratively updates the random variables until the MPP converges and returns the result)

2.3.2 Single-Loop and Decoupling Approaches

A number of alternative approaches to the double-loop problem have been proposed with the goal of reducing the computational effort required. While these methods are not the focus of this research, they will be briefly mentioned below since several have been used to solve Reliability-Based Topology Optimization problems. As the name implies, single-loop approaches modify the reliability analysis problem so that it is coupled with the design optimization problem in a single loop. The design point and the MPP then converge simultaneously. Typically, these approaches are derived by considering the KKT conditions of the reliability analysis problem. Examples of these approaches include the Single Loop Single Vector (SLSV) algorithm developed by Chen et al. [7], the single-loop methods for reliability index analysis and PMA developed by Silva et al. [8], and the Single Loop Approach (SLA) developed by Liang et al. [9]. Decoupling approaches extend this idea further by completely separating the reliability analysis and the deterministic design. For example, the Sequential Optimization and Reliability Assessment approach [10] expresses the RBDO problem as a series of alternating design optimization and reliability analysis steps. A shift parameter is computed after each reliability analysis in order to adjust the design optimization until the design converges to a point which satisfies the reliability constraint. Another approach is the Sequential Approximate Programming approach developed by Cheng et al. [11]. This approach decomposes the optimization problem into a sequence of sub-problems with approximate objectives and constraints which are valid locally around the design point. A set of recurrence formulas in terms of the design point is used to compute the reliability index and MPP simultaneously, removing the need for a nested reliability analysis problem.
A comparison study of these methods was performed by Aoues and Chateauneuf [12]. This investigation concluded that the double-loop methods are the simplest to implement, but the decoupling approaches are generally more efficient and accurate. Additionally, the SLA approach is promising, combining simplicity, efficiency, accuracy, and robustness.

2.4 Sensitivity Analysis

Many optimization schemes are gradient-based, using derivatives or sensitivities of the objective and constraint functions to iterate towards a local minimum. This includes schemes such as sequential quadratic programming (SQP), MATLAB's interior-point algorithm [13], and the Method of Moving Asymptotes [14]. It may be possible to derive the sensitivities for simple functions; however, in many engineering analyses the objective or constraint functions are sufficiently complex that it is more practical to use a numeric scheme to compute the sensitivities instead. Several of these schemes are detailed in the following subsections.

2.4.1 Finite Difference Method

The simplest approximation for a derivative is the finite difference approach. This approach comes from the definition of a derivative, which looks at the change in a function over an infinitesimally small step. Numeric procedures require a finite step size, so the derivative is instead approximated by the change in the function over a small distance h:

df/dx ≈ [f(x + h) − f(x)] / h    (29)

The finite difference method is simple to implement but has several drawbacks. First, error is introduced since the derivative is now the change over a non-infinitesimal distance. This truncation error typically increases with the step size. While this would suggest that the smallest step size possible should be used, the precision limitations of computers can cause subtractive cancellation errors for extremely small distances. Thus, error is minimized at some middle ground which is problem dependent. A step size study is typically required to identify an appropriate value. The other disadvantage of this approach is that two function evaluations are required. For some functions the computational cost of the additional evaluation is trivial; however, many functions require the evaluation of a large model, so the computational cost of this method may be impractical.
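The step-size tradeoff can be seen in a short numerical experiment. This is an illustrative sketch (the test function and step sizes are arbitrary choices, not from the thesis), differentiating sin(x) at x = 1 where the exact derivative cos(1) is known:

```python
import math

def forward_diff(f, x, h):
    # Forward finite difference approximation (Eq. 29).
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
errors = {}
for h in (1e-2, 1e-8, 1e-15):
    errors[h] = abs(forward_diff(math.sin, 1.0, h) - exact)
    print(f"h = {h:.0e}  error = {errors[h]:.2e}")
```

The moderate step suffers from truncation error, the extremely small step from subtractive cancellation, and an intermediate step minimizes the total error.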

2.4.2 Complex Step Method

The complex step approach is similar to the finite difference approach in that the derivative is approximated by using information from a nearby point. However, by taking a small step in the complex plane, the subtraction errors are avoided. Thus, step sizes as low as permitted by the precision of the computer can be used, giving a more accurate approximation of the derivative:

df/dx ≈ Im[f(x + ih)] / h    (30)

Similar to the finite difference approach, this requires the evaluation of the function at a nearby point. This approach may also be more challenging to implement since the analysis must be compatible with complex numbers.

2.4.3 Direct and Adjoint Methods

In many cases, the function contains information from the solution of a large system of linear equations, such as the displacements found by a finite element analysis. A derivative of this constraint therefore requires a derivative of the entire system of equations. Take for example the equilibrium equations for a finite element analysis:

Ku = F    (31)

where K is the stiffness matrix, F is the loads vector, and u is the vector of displacements, including a displacement of interest u_j. A constraint on a given displacement may then be written:

g = u_j − ū ≤ 0    (32)

where ū is an allowable value. The derivative with respect to a given design variable x_i is therefore:

dg/dx_i = du_j/dx_i    (33)

Two approaches exist for efficiently computing these derivatives: the direct method and the adjoint method. In the direct method, the derivative of the displacement vector with respect to the design variables is found by taking the derivative of each side of the system, then solving for the vector of sensitivities:

K (du/dx_i) = dF/dx_i − (dK/dx_i) u    (34)

While this provides the sensitivity information required, the calculations have to be repeated for each design variable. The sensitivities of many other displacements are also computed even if they are not needed. Alternatively, the adjoint method can be used. First, the displacement is expressed as:

u_j = e_jᵀ u    (35)

where e_j is a vector which is zero everywhere except for the jth term, which is one. The equilibrium equations can then be adjoined by multiplying them by an unknown vector λ which reduces the equations to a scalar:

u_j = e_jᵀ u + λᵀ (F − Ku)    (36)

After taking the derivative, terms which multiply du/dx_i can be grouped, as shown below:

du_j/dx_i = (e_jᵀ − λᵀK) (du/dx_i) + λᵀ (dF/dx_i − (dK/dx_i) u)    (37)

Through careful choice of λ, namely by solving Kᵀλ = e_j, the vector du/dx_i can be multiplied by 0, effectively removing it from the calculation. The resulting expression for the derivative of the constraint is:

du_j/dx_i = λᵀ (dF/dx_i − (dK/dx_i) u)    (38)

Comparison of Equations (34) and (38) shows that the expressions for the sensitivities are identical even though the approaches differed. Principles from each approach will be used later in this thesis.
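The agreement between the two methods can be verified on a small example. The sketch below is illustrative only (a made-up 2-DOF spring system, not a model from the thesis), with the design variable x scaling one spring's stiffness:

```python
# Direct vs. adjoint sensitivities for a 2-DOF system K(x) u = F.

def solve2(A, b):
    # Cramer's rule for a 2x2 linear system.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

x = 2.0
K  = [[x + 1.0, -1.0], [-1.0, 1.0]]   # stiffness matrix K(x)
dK = [[1.0, 0.0], [0.0, 0.0]]         # dK/dx
F  = [1.0, 2.0]                        # load vector (dF/dx = 0)

u = solve2(K, F)

# Right-hand side dF/dx - (dK/dx) u, shared by both methods (Eqs. 34, 38).
rhs = [-(dK[0][0] * u[0] + dK[0][1] * u[1]),
       -(dK[1][0] * u[0] + dK[1][1] * u[1])]

# Direct method (Eq. 34): solve K du/dx = rhs, then pick entry j.
du_dx = solve2(K, rhs)
j = 1                                  # displacement of interest: the second DOF
direct = du_dx[j]

# Adjoint method (Eq. 38): solve K^T lam = e_j, then du_j/dx = lam^T rhs.
lam = solve2(K, [0.0, 1.0])            # K is symmetric here
adjoint = lam[0] * rhs[0] + lam[1] * rhs[1]

print(direct, adjoint)  # both equal -0.75 here
```

For this system u_2 = 2 + 3/x analytically, so du_2/dx = −3/x² = −0.75 at x = 2, matching both methods.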

2.4.4 First-Order Necessary Conditions

In addition to providing information which is useful for efficiently iterating towards an optimum, information about the sensitivities can also be used to determine if a local minimum or maximum has been found. If the optimization problem is constrained by a set of equality statements, then the Lagrange Multiplier Theorem can be used to assess optimality. First, the Lagrangian function is defined:

L(x, λ) = f(x) + Σ_j λ_j h_j(x)    (39)

where f is the objective function, the h_j are equality constraints, and the λ_j are scalar multipliers for each constraint, commonly referred to as Lagrange multipliers. The Lagrange Multiplier Theorem states that at a local minimum, the derivatives of the Lagrangian with respect to each design variable and each Lagrange multiplier are zero:

∂L(x, λ)/∂x = 0,    ∂L(x, λ)/∂λ = 0    (40)

For simple problems, these conditions can be used to directly solve for the optimum point. This is rarely done in practice, however, due to the complexity of many engineering problems. Instead, these conditions are commonly used to assess the convergence of an optimization scheme and as the basis for penalty methods [15]. The Lagrange Multiplier Theorem is only applicable for problems with equality constraints; however, the Karush-Kuhn-Tucker (KKT) conditions extend the principles of this theorem to inequality constraints. For a general optimization problem with objective function f, inequality constraints g_j, and equality constraints h_k, the Lagrangian function is given by:

L(x, λ, μ) = f(x) + Σ_j λ_j g_j(x) + Σ_k μ_k h_k(x)    (41)

The KKT conditions can then be expressed in terms of the Lagrangian function. There are a total of four conditions. First, x must be feasible, indicating that it satisfies all constraints:

g_j(x) ≤ 0,    h_k(x) = 0    (42)

The second set of conditions is the first-order necessary conditions:

∇_x L(x, λ, μ) = 0    (43)

The third set of conditions is commonly referred to as the switching conditions. While equality constraints must be active, requiring x to lie on their failure surfaces, inequality constraints do not have to be active to be satisfied. This condition therefore adds slack, allowing each constraint to either be active, having a value of 0, or inactive, having a zero multiplier:

λ_j g_j(x) = 0    (44)

The last set of conditions is the non-negativity conditions. These conditions limit the inequality constraint multipliers to have positive values. This requirement stems from the sign convention that positive constraint values are infeasible:

λ_j ≥ 0    (45)

It is important to note that both the Lagrange Multiplier Theorem and the KKT conditions are only applicable if the problem is under-constrained and the constraint gradients are independent.

2.4.5 Sensitivity Analysis for RBDO Applications

Sensitivity analysis specific to RBDO applications has been considered by several authors. This includes an early investigation by Hohenbichler and Rackwitz [16] which showed that the derivative of the reliability index with respect to distribution parameters is fundamentally similar to the derivative with respect to design variables. This allows them to be grouped into a common vector and treated identically. A sensitivity expression is then derived using asymptotic approximations:

dβ/dd_i = (1/‖∇_u g(u*, d)‖) ∂g(u*, d)/∂d_i    (46)

A similar expression can also be derived using a perturbation procedure [11]. A small perturbation in β can be expressed as the result of a small change in the MPP times the sensitivity of β at the MPP:

δβ = (∂β/∂u)ᵀ|_{u*} δu*    (47)

Additionally, since the MPP is a local minimum, the KKT conditions given by Equation (42) will be satisfied, giving the relationship between the objective and constraint gradients:

∂β/∂u + λ ∂g/∂u = 0    (48)

The non-deterministic constraint can then be perturbed by a small change in the design variables. If it is assumed that a new MPP will always be found, then the equality constraint will still be satisfied:

g(u* + δu*, d + δd) = g(u*, d) + (∂g/∂u)ᵀ δu* + (∂g/∂d)ᵀ δd = 0    (49)

After combining the above equations and solving for the Lagrange multiplier, the following expression is then obtained for the sensitivities of the reliability index to the design variables:

dβ/dd_i = (1/‖∇_u g(u*, d)‖) ∂g(u*, d)/∂d_i    (50)

A more general expression was derived by Kwak and Lee [17] using a perturbation approach. This approach states that for any nested state equations and nested inequalities, the sensitivity of the reliability index takes the same form, with the limit-state derivative augmented by the multipliers of the nested state equations:

dβ/dd_i = (1/‖∇_u g‖) (∂g/∂d_i + μᵀ ∂r/∂d_i)    (51)

Lastly, a general approach for double-loop optimization problems with inequality constraints is suggested by Haftka et al. [15]. Derivatives of the KKT conditions given by Equations (42) and (43) for a nested problem in terms of variables u can be taken with respect to a given outer variable d_i:

d/dd_i [∂L(u, λ)/∂u] = 0,    d/dd_i [g(u, d)] = 0    (52)

Assuming that the active constraints do not change, this expands into a system of equations in terms of the sensitivities of the nested solution u* to the design variables d:

[ A + B   Nᵀ ] [ du*/dd_i ]     [ ∂²L/∂u∂d_i ]
[ N        0 ] [ dλ/dd_i  ]  = −[ ∂g/∂d_i    ]    (53)

where A is the Hessian of the objective with respect to u, B is a summation of the Hessians of the constraints with respect to u multiplied by their Lagrange multipliers, and N is a matrix of the derivatives of the constraints with respect to u:

A = ∇²_uu f,    B = Σ_j λ_j ∇²_uu g_j,    N = ∂g/∂u    (54)

It is important to note that this approach has extra equations since the Lagrange multipliers will also be functions of the outer variables. Depending on the application, the sensitivities of the Lagrange multipliers may or may not be needed, so the extra computational effort required is potentially a disadvantage. It is also important to note that second-order information is required to use this approach.
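The post-optimality expressions above can be checked numerically on a toy nested problem. The sketch below is an illustrative aside (the linear limit state and all numbers are assumptions, not from the thesis); it compares the analytic sensitivity of Equation (50) against a finite difference of the re-solved nested optimum:

```python
import math

# Nested FORM problem with a linear limit state in standard normal space,
# g(u, d) = d - u1 - u2. The MPP is the projection of the origin onto
# the plane u1 + u2 = d, so it can be written in closed form here.

def beta(d):
    u_star = (d / 2.0, d / 2.0)   # MPP for this limit state
    return math.hypot(*u_star)

d = 3.0

# Analytic post-optimality sensitivity (Eq. 50):
# dbeta/dd = (dg/dd) / ||grad_u g|| = 1 / sqrt(2), since grad_u g = (-1, -1).
analytic = 1.0 / math.sqrt(2.0)

# Check by re-solving the nested problem at a perturbed design (Eq. 52's premise):
h = 1e-6
fd = (beta(d + h) - beta(d)) / h
print(analytic, fd)  # the two agree to finite-difference accuracy
```

Note that the finite-difference check requires an additional solution of the nested problem per design variable, which is exactly the cost the analytic expressions avoid.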

2.5 Topology Optimization

Optimization problems within the field of structural optimization can broadly be categorized into three classes: sizing, shape, and topology optimization. Generally speaking, the difference between these approaches is the number of assumptions made. In sizing optimization, the geometry of the problem is well defined, with the objective being to identify the best combination of dimensions. A typical sizing problem would be to determine the dimensions of an I-beam or the cross-sectional areas of the members of a truss structure. In shape optimization, the general structural configuration is known; however, boundaries are allowed to move. For example, the coordinates of different joints in a truss structure might be treated as design variables. Lastly, in topology optimization the geometry is generally unknown, with only a set of boundary conditions being provided. The objective is then to identify the most efficient distribution of material. A common structural topology optimization problem is to minimize the compliance of the structure given a finite amount of material. This requires the solution of a finite element analysis problem each iteration:

KU = F    (55)

where K is the global stiffness matrix, U is the unknown global displacements of the nodes, and F are the global loads applied at the nodes. The compliance is then defined as:

c = FᵀU    (56)

Compliance minimization is commonly used as the objective of a topology optimization analysis since it serves as a simple and efficient approximation for the behavior of the structure. Compliance is equivalent to the average strain energy density in the structure, so a lower value implies that the structure has not deformed as much, and is therefore stiffer. Derivatives of the compliance are also relatively simple to compute by considering the direct method described in Section 2.4.3. An example of the solution to a typical compliance minimization problem is depicted in Figure 7.
In this problem, commonly referred to as a Mitchell beam, a vertical load is applied at

Figure 7: Topology optimization solution for a Mitchell beam

the center of a simply supported beam. It is common to model the Mitchell beam using a symmetry boundary condition to reduce the number of design variables needed. In Figure 7, the vertical force is then applied to the upper left corner while the structure is fixed at the lower right corner. The compliance of the structure is then minimized with the constraint that only 30% of the volume can be used. The result is a stiff, efficient structure with respect to the given loading condition. In practice, this topology would then be refined through further iterations of shape and sizing optimization.

2.5.1 Solid Isotropic Material with Penalization (SIMP)

One of the most intuitive approaches to topology optimization would be to simply turn on and off different regions until an optimum solution is found. This idea forms the basis for the Solid Isotropic Material with Penalization (SIMP) approach to the topology optimization of structures [18]. In this approach, the continuum is discretized into a mesh of many small blocks of material, each of which has a pseudo-density which governs its behavior. This mesh is also used for finite element analysis so that these densities can be coupled with structural properties. Typically, the pseudo-density acts as a multiplier for the elastic modulus of the material. Thus, elements with zero density do not contribute to the stiffness of the structure, effectively acting like voids. Ideally, each density would be permitted to have an integer value of either 0 or 1, corresponding to an element which contributes either no stiffness or its maximum stiffness. In

practice, this approach has several limitations. First, singularities can arise in the global stiffness matrix when zero stiffness is prescribed for an element, causing the finite element analysis to fail. To address this, a near-zero value is typically used for the lower bound of the density instead. Second, discrete optimization is inefficient, especially when there are many design variables. A more efficient approach would be to use a gradient-based optimizer; however, this requires continuous design variables. As a result, intermediate values of the density would be permissible. To avoid intermediate values, the SIMP approach replaces the elastic modulus of each element with a power law, given in Equation (57):

E_i = ρ_i^p E_0    (57)

where E_0 is a nominal value of the elastic modulus, ρ_i is the pseudo-density of element i, and p is the penalization exponent. The penalization factor drives the optimizer away from intermediate densities by making them less efficient than full values, effectively emulating the discrete behavior with a continuous function. Note that this also requires the assumption that the material is isotropic. Since the elastic modulus ultimately is used in a finite element analysis, it is common practice to factor the density out of the element stiffness matrix:

k_i = ρ_i^p k_0    (58)

This is particularly beneficial when a uniform mesh is used since every element will share the same unscaled stiffness matrix k_0. Additionally, it is important to note that this approach does not make any assumptions about the type of finite element used. Thus, an identical approach can be used for truss problems, plate problems, 3D problems, and problems with higher-order elements.
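The SIMP power law of Equation (57) is compact enough to state directly in code. This sketch is illustrative (the p = 3 exponent, modulus value, and lower bound are typical but assumed choices):

```python
# SIMP stiffness interpolation (Eq. 57): E_i = rho_i^p * E_0.
E0, p = 200e9, 3.0   # illustrative nominal modulus (Pa) and penalization exponent

def simp_modulus(rho, E0=E0, p=p, rho_min=1e-3):
    # A near-zero lower bound on the density avoids singular stiffness matrices.
    rho = max(rho, rho_min)
    return (rho ** p) * E0

# Intermediate densities are penalized: half density yields only 1/8 stiffness.
print(simp_modulus(1.0) / E0, simp_modulus(0.5) / E0)  # 1.0 0.125
```

With p = 3, an element at half density carries one-eighth of the full stiffness per unit material, which is what pushes the optimizer toward 0/1 designs.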

2.5.2 Level-Set Methods

An alternative class of approaches to the topology optimization problem is the level-set methods [19]. These methods are not the focus of this research; however, RBTO using these approaches has been explored in the literature. In general, level-set methods differ from SIMP by replacing the densities of elements with a global scalar function φ. The level-set curves of this function then define the boundaries of the structure. To iterate the design, a Hamilton-Jacobi equation is developed in terms of the scalar function with respect to space x and a fictitious time t:

∂φ(x, t)/∂t + ∇φ · (dx/dt) = 0    (59)

This can then be expressed in terms of a speed vector V(Γ) of the level-set surface Γ:

∂φ/∂t + |∇φ| V(Γ) = 0    (60)

The fictitious time is equivalent to the iteration of the design. Additionally, it has been shown that the iteration of this function is equivalent to the descent series of an optimization problem [19]. To simplify the problem, φ can be discretized across a mesh in a manner similar to a finite element problem, which also allows for finite element solutions to be easily incorporated into objectives or constraints. Overall, the level-set method has the advantage that the boundaries of the structure can theoretically be defined by a continuous function instead of a set of discrete elements.

2.5.3 Filtering

A common issue encountered in topology optimization using the SIMP approach is checkerboarding, a behavior where the optimizer converges towards a solution where elements are only connected diagonally. The result is a grid of alternating zero and full density elements. An example of this behavior for a cantilever beam problem is depicted below in Figure 8. Checkerboarding is not a physically realistic solution, however, but is instead an artifact of the finite element discretization, since the corner nodes allow for the transfer of stiffness information between diagonal elements, even though the elements are otherwise discontinuous. Thus,

Figure 8: Cantilever beam problem exhibiting checkerboarding

adjacent elements are not required to efficiently fill in a region except when very high stiffness is needed. Two of the most common approaches for preventing checkerboarding are density filtering and sensitivity filtering. Density filtering expresses the optimization problem as two sets of variables: the physical pseudo-densities and the optimization variables. The pseudo-densities are the values which are actually used in the finite element analysis, objective function, and constraint, while the optimization variables are only used by the optimizer. These two sets of variables are related by expressing each pseudo-density as a weighted sum of the optimization variables [20]:

ρ_i = Σ_j w_ij x_j    (61)

A common set of weights are the conic weights. These weights vary linearly with the distance d_ij between the centers of elements i and j when this distance is within a specified radius r. The weights are zero outside of this radius. Additionally, the weights are normalized so that the sum of all weights for a given density is 1. This prevents the physical densities from exceeding their upper bounds:

w_ij = (r − d_ij) / Σ_k max(r − d_ik, 0)   for d_ij ≤ r,    w_ij = 0   otherwise    (62)

The weights can be computed when initializing the optimization process since they only depend on the geometry and finite element mesh. The weights can then be conveniently grouped into a weighting matrix which relates the pseudo-density vector to the optimization variable vector.

W = [w_ij]    (63)

ρ = W x    (64)

One convenient property of this approach is that since this is a linear transformation, the chain rule can be used to relate the sensitivities with respect to the pseudo-densities to those with respect to the optimization variables. This has the advantage of simplifying the sensitivity analysis and preserving the KKT conditions. The sensitivity transformation is given below in Equation (65):

∂f/∂x = Wᵀ (∂f/∂ρ)    (65)

Overall, this approach avoids the checkerboarding behavior by specifying a maximum change in the density between elements which is a function of the filter radius. This also has the benefit of setting a minimum size for design features, since SIMP penalizes intermediate values of the elastic modulus. This drives the optimizer towards features which are sufficiently large that a full value of the stiffness can be obtained for at least some elements. Another commonly used approach is sensitivity filtering [21]. In this approach, both the analysis and the optimization procedure are expressed in terms of the pseudo-densities. The sensitivities of the objective function and/or the constraints, however, are then modified using Equation (66). Similar to the linear density filter, only elements within a radius are allowed to influence the sensitivity of ρ_i. If an element is outside of this radius, then a value of 0 should be used instead:

(∂f/∂ρ_i)_filtered = [ Σ_j (r − d_ij) ρ_j (∂f/∂ρ_j) ] / [ ρ_i Σ_j (r − d_ij) ],    d_ij ≤ r    (66)

This approach is simpler to implement; however, the modified sensitivities cannot be used to assess the KKT conditions.
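The conic density filter of Equations (61)-(64) can be demonstrated on a small example. This sketch is illustrative (a made-up one-dimensional row of unit elements and an arbitrary radius, not a mesh from the thesis):

```python
# Conic density filter on a 1-D row of unit elements.
centers = [0.5, 1.5, 2.5, 3.5, 4.5]   # element centers on a unit grid
r = 1.5                                # filter radius

def filter_row(i):
    # Unnormalized conic weights r - d for neighbors within the radius (Eq. 62),
    # then normalized so each row of W sums to 1.
    w = [max(0.0, r - abs(centers[i] - c)) for c in centers]
    s = sum(w)
    return [wi / s for wi in w]

W = [filter_row(i) for i in range(len(centers))]   # weighting matrix (Eq. 63)

x = [1.0, 0.0, 1.0, 0.0, 1.0]          # checkerboard-like optimization variables
rho = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]  # Eq. 64
print(rho)  # the filtered pseudo-densities are smoothed toward intermediate values
```

The alternating 0/1 pattern is averaged out, which is exactly how the filter suppresses checkerboarding.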

2.6 Reliability-Based Topology Optimization (RBTO)

Reliability-based topology optimization has been explored by multiple authors. A brief summary of these investigations is presented in this section, with a focus on the sensitivity analysis procedures used. Among the earliest attempts to incorporate topology optimization into an RBDO routine was the effort by Kharmanda et al. [22]. This approach modified the 99-line topology optimization code developed by Sigmund [23] to include reliability analysis for a random load. Rather than use a double-loop implementation, it was proposed that the FORM problem be solved prior to each iteration in order to determine the MPP of the force. This force is then used in the 99-line topology optimization routine to determine the next design. Sensitivity analysis is performed using finite difference derivatives. However, it has been observed that the implementation of this approach was flawed and resulted in a reliability problem with no physical significance [24]. Another early attempt at RBTO was made by Maute and Frangopol [25]. In this research, the deforming structure of a micro-electro-mechanical system is optimized using SIMP with PMA for the reliability analysis. The actuation force and spring stiffness are treated as random parameters. Sensitivity analysis is performed by computing the total derivative of the performance measure with respect to the design variables:

dg(u*, d)/dd_i = ∂g/∂d_i + (∂g/∂u)ᵀ (du*/dd_i)    (67)

To simplify this calculation, the first-order necessary conditions of the PMA problem are then considered:

∂g/∂u − λ (u/‖u‖) = 0    (68)

The authors argue that at the MPP, the reliability index is constant, so the performance measure does not depend on the change in the MPP. Therefore, the sensitivity is simply:

dg(u*, d)/dd_i = ∂g(u*, d)/∂d_i    (69)

Another effort at RBTO considered the minimization of compliance with a spatially varying and correlated elastic modulus represented by a marginally lognormal distribution [26]. It was observed that the tail of the compliance distribution approximates that of a lognormal distribution. Using the relationship between the reliability index and the probability of failure given by Equation (20), an equivalent relationship between the reliability index and statistical moments which are fitted to the tail is then derived. These statistical moments are then expressed in terms of the finite element analysis, allowing for analytic sensitivities of the reliability index with respect to the pseudo-densities to be derived. While this sensitivity analysis approach is efficient, it is very problem specific. A benchmark problem for RBTO has been proposed and evaluated by Rozvany and Maute [1] using SIMP with PMA. This benchmark problem consists of a rectangular region which is clamped at the bottom, depicted below in Figure 9. A deterministic vertical force and a zero-mean, nonzero-standard-deviation horizontal force are applied at the center of the upper surface. The objective is to minimize the volume given a constraint on the reliability of the structure. The analytic solution for this structure is shown to be a 2-bar truss, with the angle between the bars related to the standard deviation of the horizontal force and the target reliability requirement. The RBTO solution will not exactly match this angle, however, as a result of the discretization of the region into finite elements. Sensitivity analysis was performed using the procedure from Maute and Frangopol [25], given by Equation (69).

Figure 9: RBTO benchmark problem from Rozvany and Maute [1]

One of the issues related to RBDO in general is the difference between the reliability against a single type of failure and the reliability against failure when multiple failure modes are possible. This concept was explored in the context of RBTO by Silva et al. [8]. The authors introduce the concepts of Component Reliability-Based Topology Optimization (CRBTO) and System Reliability-Based Topology Optimization (SRBTO), the difference being that CRBTO expresses each failure criterion as a separate constraint while SRBTO uses a single constraint to capture all possible failure criteria. Using a single-loop method, the authors were able to solve a number of RBTO problems using each approach, demonstrating that the reliability of the system is lower when each constraint is considered separately. For the sensitivity analysis, the derivative of the reliability index with respect to a given design variable d_i is approximated by:

    ∂β/∂d_i ≈ (1/‖∇_u G‖) ∂G(u*, d)/∂d_i    (70)

RBTO has also been applied to geometrically nonlinear structures, including a plate with a random elastic modulus and random loads [27]. In this investigation, a double-loop problem consisting of an outer sequential linear programming (SLP) SIMP problem was solved with a nested PMA analysis. Since PMA was used, the sensitivity of the nested problem only requires the derivatives of the performance metric with respect to each random variable. This is similar to the problem solved by Maute and Frangopol [25] summarized above. In this case, the performance metric was a displacement constraint, so the adjoint method was used to derive sensitivities of the displacement with respect to each design variable and the random loads. Interestingly, the authors also performed a study comparing the efficiency of this sensitivity analysis approach to the finite difference method (Section 2.4.1).
This study found that the finite difference method required 6698 s of computation time to compute the sensitivities of 100 design variables, approximately 1000 times the computational effort of the adjoint approach. This highlights the inefficiency of the finite difference approach for RBTO problems, especially since 100 design variables is a very small topology optimization problem. Lastly, a study was performed which compared RBTO to deterministic topology optimization with and without a factor of safety. Compared to the deterministic solution with no factor of safety, RBTO with a target reliability index of 3 reduced the probability of failure from 50% to 0.135% at the cost of 9.2% more material. A safety factor of 1.5 successfully reduced the

probability of failure as well, however this required 13.9% more material than the deterministic solution. This showed that both approaches increase the safety of the design; however, RBTO has the advantage of being able to target a specific probability of failure.

One of the challenges related to FORM is the inaccuracy due to the curvature of the constraint function. An investigation was performed into improving the accuracy of this approach using a procedure called the segmental multi-point linearization (SML) approach [28]. This approach replaces the single hyperplane approximation used by FORM with several hyperplanes fitted to the response surface in order to better capture the effects of changes in the curvature. A more accurate sensitivity expression was also derived for a general nondeterministic constraint using a perturbation procedure similar to Equation (49). The result is an expression similar to Equation (50), however a surface integral is required since no assumption is made about the linearity of the failure surface:

    ∂P_f/∂d_i = −∫_{g(u,d)=0} (1/‖∇_u g‖) (∂g/∂d_i) φ(u) dS    (71)

This generally cannot be evaluated exactly, however it can be approximated as a summation of weighted gradients of the constraint evaluated at different fitting points, similar to the SML procedure:

    ∂P_f/∂d_i ≈ Σ_k w_k (∂g/∂d_i)|_{u_k}    (72)

The weights w_k are found by integrating Equation (71) across each hyperplane. Several ground structure problems are then solved using this approach.

An alternative sensitivity analysis procedure was used by Kim et al. to solve several RBTO problems using FORM and PMA [29]. For the first-order reliability method, the sensitivities are approximated by taking the derivative of the MVFOSM approximation for the reliability index given by Equation (16):

    ∂β/∂d_i ≈ ∂(μ_G/σ_G)/∂d_i = (1/σ_G)(∂μ_G/∂d_i) − (μ_G/σ_G²)(∂σ_G/∂d_i)    (73)
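The MVFOSM-based sensitivity in Equation (73) is just the quotient rule applied to β ≈ μ_G/σ_G. A minimal sketch, assuming a hypothetical linear limit state G = R − d·S with independent normal R and S (for which MVFOSM is exact), checked against a finite difference:

```python
import numpy as np

# hypothetical limit state G = R - d*S; R, S normal, d is the design variable
mu_R, sig_R = 10.0, 1.0
mu_S, sig_S = 4.0, 0.5

def beta_mvfosm(d):
    mu_G = mu_R - d * mu_S
    sig_G = np.sqrt(sig_R ** 2 + (d * sig_S) ** 2)
    return mu_G / sig_G

def dbeta_dd(d):
    # quotient rule on beta = mu_G / sigma_G, as in Equation (73)
    mu_G = mu_R - d * mu_S
    sig_G = np.sqrt(sig_R ** 2 + (d * sig_S) ** 2)
    dmu = -mu_S
    dsig = d * sig_S ** 2 / sig_G
    return (dmu * sig_G - mu_G * dsig) / sig_G ** 2

d0, h = 1.0, 1e-6
fd = (beta_mvfosm(d0 + h) - beta_mvfosm(d0 - h)) / (2 * h)
print(dbeta_dd(d0), fd)
```

For nonlinear limit states the same formula is only a first-order approximation, which is why it appears in the literature as an inexpensive surrogate for full MPP sensitivities.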

The sensitivity analysis used for PMA is simply the derivative of the constraint function with respect to the design variable, evaluated at the worst-case point:

    ∂g_p/∂d_i = ∂g/∂d_i |_{u*}    (74)

RBTO was then performed for a statically loaded cantilever plate with a random load, elastic modulus, and thickness, as well as for a beam subject to an eigenvalue constraint.

Several authors have also explored using decoupled algorithms for RBTO. This includes applying RBTO to large 3D problems, including wing ribs and a tail cone structure, using the SORA approach [30]. Another approach combined the Hybrid Cellular Automaton (HCA) algorithm with PMA [31]. HCA is an alternative topology optimization algorithm which uses both the density and the strain energy to define the state of each element, leading to an objective of simultaneously minimizing these two parameters subject to the problem constraints. Since these RBTO approaches used decoupled algorithms, the sensitivities of the MPP with respect to the design variables were not required.

3. Sensitivity of Nested Optima

3.1 General Derivation

A general double-loop optimization problem can be expressed in terms of two sets of variables: the outer design optimization variables x = {x₁, x₂, ..., x_n} and the nested design optimization variables y = {y₁, y₂, ..., y_m}. Since the solution y* of the nested optimization problem is found using an algorithm, its relationship with x is assumed to be implicit. Additionally, each component of y* is potentially a function of all outer design variables:

    y*_k = y*_k(x₁, x₂, ..., x_n),    k = 1, 2, ..., m    (75)

In this formulation, the outer loop is only a function of the outer design variables and the optimum of the nested problem:

    min_x  f(x, y*(x))
    s.t.   g(x, y*(x)) ≤ 0    (76)
           h_j(x) ≤ 0
           h̄_k(x) = 0

Where f is the objective of the outer problem, g(x, y*) is a constraint which depends on the solution of the nested problem, and h_j and h̄_k are sets of deterministic inequality and equality constraints which need to be satisfied in the outer problem. Note that g(x, y*) is expressed as an inequality constraint, however this approach is also applicable to equality constraints. The nested optimization problem is given by:

51 min,,, s. t.,,, 0, 1,2,, (77) where, is the solution to,, 0 Where,,, is the nested objective function,,,, is the nested equality constraint(s), and,,, is a set of state equations which are solved to obtain the state variables,. The sensitivities of the nested optima with respect to the outer variables may be required for some applications, including computing analytic sensitivities of,. While these sensitivities could be obtained using a finite difference approach, it would require additional evaluations of the nested problem. Alternatively, analytic expressions for these sensitivities can be derived using the Lagrange Multiplier Theorem. As discussed in Section 2.4.4, at the optimum of the nested problem the gradients of the Lagrangian function will be zero with respect to all and all Lagrange multipliers:,,,, 0,,,, 0 (78) For a given, the above equations can be solved for and. If it can be assumed that a local minimum of the nested problem can be found for any point in the neighborhood of, then these equations will be satisfied everywhere in the neighborhood of. Therefore, the derivative of these equations with respect to each design variable can be used to compute the sensitivities of and with respect to. For a given, this results in a new system of equations:,,,, 0,,,, 0 (79) The definition of the Lagrangian function from Equation (39) can then be substituted in. Since the Lagrange multipliers are scalar multipliers for the constraints, the gradient of the Lagrangian 39

with respect to a Lagrange multiplier is simply the corresponding constraint function. Therefore, the system of equations can be broken into two distinct subsets:

    d[∇_y f̂ + Σ_j λ_j ∇_y ĥ_j]/dx_i = 0
    d[ĥ_j]/dx_i = 0    (80)

Where ∇_y f̂ is a column vector of the derivatives of the nested objective function, ∇_y ĥ is a matrix of constraint derivatives, λ is a column vector of the Lagrange multipliers, dλ/dx_i is a column vector of the derivatives of the Lagrange multipliers with respect to x_i, and ĥ is a column vector of the constraints. It is important to note that the constraints ĥ_j(x, y, v) may be functions of the state variables v(x, y), implying that any derivatives of ĥ_j with respect to x_i or y_k must consider the derivatives of v(x, y) with respect to x_i or y_k. For example, expanding the ∇_y ĥ_j term from Equation (80) to explicitly include these derivatives yields the following expression:

    ∇_y ĥ_j = ∂ĥ_j/∂y + (∂v/∂y)ᵀ (∂ĥ_j/∂v)    (81)

Taking the derivative again with respect to x_i will then require further expansion. Since v(x, y) is the solution to the state equations R(x, y, v) = 0, many of these derivatives can be efficiently computed using the direct and adjoint methods discussed previously. For simplicity, it is assumed for the remainder of this section that any derivatives of ĥ_j implicitly include these calculations.

Each of the two subsets of equations can then be considered separately. The first subset, containing ∇_y f̂ and ∇_y ĥ, can be expanded using a total derivative since y* is implicitly a function of x. In general, the total derivative expansion for a generic function φ(x, y*(x)) is given by:

    dφ/dx_i = ∂φ/∂x_i + Σ_k (∂φ/∂y_k)(dy*_k/dx_i)    (82)

After expanding the first subset of equations with the total derivative and simplifying, the resulting matrix equations are:

    [∇²_yy f̂ + Σ_j λ_j ∇²_yy ĥ_j] (dy*/dx_i) + [∇_y ĥ] (dλ/dx_i) + b_i = 0    (83)

Where ∇²_yy f̂ is the Hessian matrix of f̂ with respect to y, ∇²_yy ĥ_j is the Hessian matrix of constraint ĥ_j with respect to y, dy*/dx_i is the column vector of the sensitivities of each component of y* with respect to x_i, and b_i is a column vector of the mixed derivatives ∂²L/∂y∂x_i, which is equal to the derivative of ∇_y L with respect to x_i. This subset of equations contains the desired sensitivities, however it does not form a complete set since the derivatives of the Lagrange multipliers with respect to x_i are also unknown. In total, there are m equations but m + p unknowns in this subset. A complete set of equations can be formed by considering the second subset of equations. Similar to before, the derivatives of the constraint equations need to be expanded using implicit differentiation. The result is given below in Equation (84):

    [∇_y ĥ]ᵀ (dy*/dx_i) + c_i = 0    (84)

Where c_i is a column vector of the derivatives ∂ĥ_j/∂x_i. Similar to the first subset, there are more unknowns than equations; specifically, this subset has p equations but m + p unknowns. When combined, Equations (83) and (84) give a complete system of equations in terms of the unknown sensitivities of the nested optimum and the Lagrange multipliers with respect to a given outer optimization variable. Conveniently, this system is entirely linear with respect to these sensitivities. The system is also typically small, having a size equal to the sum of the number of nested variables and equality constraints. The system can then be expressed in the form Ax = b so that efficient linear solvers can be used, as shown below:

    [ ∇²_yy f̂ + Σ_j λ_j ∇²_yy ĥ_j    ∇_y ĥ ] { dy*/dx_i }     { b_i }
    [ (∇_y ĥ)ᵀ                        0     ] { dλ/dx_i  } = − { c_i }    (85)

Where all information is evaluated at the current x and nested optimum y*. Another useful property of this system is that the left-hand side does not contain any derivatives with respect to x_i. This implies that the left-hand side only needs to be computed once for a given iteration of the outer problem. All sensitivities can then be obtained using a procedure analogous to the solution of a finite element system with multiple load cases. Therefore, the computational cost of this approach scales linearly with respect to the number of outer optimization variables.

This approach does have two drawbacks, however. First, the sensitivities of the Lagrange multipliers with respect to x_i must be computed as a byproduct of this analysis even though this information may not be needed. This adds an extra equation to the system for each Lagrange multiplier, increasing the computational cost. Additionally, the second derivatives of the nested objective function and constraints are required. It may be impractical or impossible to obtain these for some problems, though it should be noted that approximations may be available depending on the optimization scheme used to solve the nested problem. For example, the Sequential Quadratic Programming scheme requires the Hessian of the Lagrangian function for the nested problem, which can either be supplied or calculated during the optimization procedure using an update process [15]. In that case, the only additional information which needs to be computed is the mixed second derivatives of the objective and constraint functions.

Overall, the result of this derivation is a procedure which can be used to obtain analytic sensitivities of the nested optima with respect to the outer design variables if the nested problem contains only equality constraints. It should be noted that the result of this derivation is identical to the derivation from Haftka et al.
for inequality constraints [15], discussed previously. This is because the approach for inequality constraints requires the assumption that active constraints remain active and inactive constraints remain inactive. Thus, the active constraints behave like equality constraints for the purposes of the sensitivity analysis.
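The derivation above can be sketched end-to-end on a toy nested problem small enough to solve by hand. The problem, its closed-form optimum, and all values below are hypothetical; the point is that the linear system of Equation (85) reproduces the finite-difference sensitivities of the nested optimum:

```python
import numpy as np

# nested problem (hypothetical): min_y (y1 - x)^2 + y2^2  s.t.  y1 + y2 = 1
def y_star(x):
    # closed-form optimum, standing in for an inner optimization algorithm
    return np.array([(1 + x) / 2, (1 - x) / 2])

x0 = 0.3
y = y_star(x0)
lam = -2 * y[1]                    # from stationarity: 2*y2 + lam = 0

# KKT sensitivity system, Equation (85):
# [ H    a ] { dy/dx   }     { b }
# [ a^T  0 ] { dlam/dx } = - { c }
H = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the Lagrangian in y
a = np.array([1.0, 1.0])                 # constraint gradient in y
lhs = np.block([[H, a[:, None]], [a[None, :], np.zeros((1, 1))]])
b = np.array([-2.0, 0.0])                # mixed derivatives d2L/dy dx
c = np.array([0.0])                      # dh/dx (constraint has no x)
rhs = -np.concatenate([b, c])

sens = np.linalg.solve(lhs, rhs)
dy_dx, dlam_dx = sens[:2], sens[2]

# verify against central finite differences of the nested optimum
h = 1e-6
fd = (y_star(x0 + h) - y_star(x0 - h)) / (2 * h)
print(dy_dx, fd)
```

For this problem dy*/dx = (1/2, −1/2), and the Lagrange multiplier sensitivity falls out of the same solve, illustrating the "byproduct" cost noted above.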

3.2 Application to the First-Order Reliability Method

As discussed in Section 2.3.1, a reliability-based design optimization routine which uses FORM has a double-loop structure, with the FORM optimization problem nested within a design optimization problem. The FORM problem, given by Equation (21), is a relatively straightforward optimization problem, having only a single equality constraint. Additionally, the sensitivity of the reliability index to each design variable may be expressed as the sum of the components of the MPP times their sensitivities, as shown by Equation (28). Therefore, analytic sensitivities of the reliability index to the design variables can be obtained if the sensitivities of the MPP, du*/dx_i, are evaluated using the procedure described in Section 3.1.

Two simplifications can be made to the system of equations for the sensitivities given by Equation (85). First, FORM has only a single constraint, so all constraint derivative matrices reduce to column vectors. Second, β is not explicitly a function of any design variables, so the mixed derivatives of β are all 0. The simplified system of equations is given in Equation (86):

    [ ∇²_uu β + λ ∇²_uu g    ∇_u g ] { du*/dx_i }     { λ ∂(∇_u g)/∂x_i }
    [ (∇_u g)ᵀ               0     ] { dλ/dx_i  } = − { ∂g/∂x_i }    (86)

Where all first and second derivatives are evaluated at the MPP and the current design. This expression can be further simplified by considering the formulation of β given by Equation (19). Since β can easily be expressed for any number of random parameters, the Hessian of β can be defined for any problem. Specifically, for a problem with standard normal random parameters u₁, ..., u_r the second derivatives are given by:

    ∂²β/∂u_j ∂u_k = δ_jk/β − u_j u_k/β³    (87)

Additionally, the Lagrange multiplier can be computed using the necessary conditions from Equation (40). Since an optimum has already been found, the only unknown in this system is λ, so only one equation is required. For example, λ can be expressed in terms of the derivatives of the Lagrangian with respect to u₁:

    λ = −(∂β/∂u₁)/(∂g/∂u₁) = −(u*₁/β)/(∂g/∂u₁)    (88)

Therefore, the only problem-specific information required is the first and second derivatives of the constraint. For convenience, these derivatives can be computed in terms of the random parameters and then converted to the equivalent standard normal form using the transformation given by Equation (23). It is also important to note that the constraint gradients ∇_u g are typically already computed while solving the FORM problem. A general procedure for obtaining analytic sensitivities of the reliability index at each iteration of the design problem can then be constructed. This procedure consists of the following steps:

1. Solve the FORM problem to obtain β, u*, and the gradients ∇_u g
2. Compute the Lagrange multiplier for the equality constraint using Equation (88)
3. Evaluate the Hessians of β and g at the MPP, then formulate the left-hand side of Equation (86)
4. For each x_i, compute ∂g/∂x_i and ∂(∇_u g)/∂x_i, then formulate the right-hand side of Equation (86)
5. Use a linear solver to obtain the sensitivities of the MPP with respect to x_i
6. Evaluate the sensitivity of β with respect to x_i using Equation (28)
7. Repeat steps 4-6 for every x_i

This procedure is validated in Section 4.2 using a 3-bar truss problem. It is then applied to several reliability-based topology optimization problems in the following sections.
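The steps above can be sketched on a limit state simple enough that the MPP sensitivities are known in closed form. The limit state g(u, x) = 3x − u₁ − x·u₂ and all values here are hypothetical, and SciPy stands in for the FORM solver; the Hessian of g vanishes because g is linear in u, which keeps the left-hand side of the system short:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical limit state in standard normal space; failure when g <= 0
def g(u, x):
    return 3 * x - u[0] - x * u[1]

def solve_form(x):
    # Step 1: find the MPP by minimizing ||u||^2 on the failure surface
    cons = {"type": "eq", "fun": lambda u: g(u, x)}
    res = minimize(lambda u: u @ u, x0=np.ones(2), constraints=[cons],
                   tol=1e-12)
    return res.x, np.linalg.norm(res.x)

x0 = 1.0
u, beta = solve_form(x0)

# Step 2: Lagrange multiplier from grad(beta) + lam * grad(g) = 0
grad_g = np.array([-1.0, -x0])
lam = -(u[0] / beta) / grad_g[0]

# Step 3: Hessians at the MPP (the Hessian of g in u is zero here)
hess_beta = (np.eye(2) - np.outer(u, u) / beta ** 2) / beta
lhs = np.block([[hess_beta, grad_g[:, None]],
                [grad_g[None, :], np.zeros((1, 1))]])

# Step 4: right-hand side from d(grad_u g)/dx and dg/dx
d_gradg_dx = np.array([0.0, -1.0])
dg_dx = 3.0 - u[1]
rhs = -np.concatenate([lam * d_gradg_dx, [dg_dx]])

# Step 5: solve for the MPP sensitivities
sens = np.linalg.solve(lhs, rhs)
du_dx = sens[:2]

# Step 6: sensitivity of the reliability index, dbeta/dx = (u . du/dx)/beta
dbeta_dx = u @ du_dx / beta
print(du_dx, dbeta_dx)
```

For this limit state the exact MPP is u* = 3x(1, x)/(1 + x²), so at x = 1 the sensitivities should be du*/dx = (0, 1.5) and dβ/dx = 3/2^{3/2}, which the linear solve recovers without re-solving the FORM problem.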

4. Results and Discussion

4.1 Overview of the Analyses Performed

Reliability-based design optimization generates designs which are not only optimal but also reliable by introducing constraints on the non-deterministic behavior of the system. This comes at the cost of increased computational effort, particularly when a double-loop approach is used. This cost can be minimized, however, if analytic sensitivities of the solution of the nested problem are available. A procedure for obtaining these was derived in Section 3.1 and then extended to RBDO using the First-Order Reliability Method in Section 3.2. The remainder of this thesis is devoted to exploring applications of this procedure, including extensions to Reliability-Based Topology Optimization.

First, a probabilistic variant of the classic 3-bar truss problem from Choi et al. [2] is used to validate the method. Then, three reliability-based topology optimization problems are considered. This includes the benchmark problem developed by Rozvany and Maute [1], which constrains the probabilistic compliance of a structure. Two new problems are then formulated and solved: a probabilistic compliance problem with multiple random loads and a deflection-constrained cantilever structure with a random load. A secondary objective of this research is to further explore the benefits of RBDO and RBTO. To achieve this, the results of these approaches are compared to deterministic equivalents when available.

4.2 RBDO of a 3-Bar Truss Structure

4.2.1 Problem Statement

The first problem considered is the probabilistic 3-bar truss from Choi et al. [2]. The truss consists of 3 bar elements with unknown cross-sectional areas, depicted below in Figure 10. The three bars are joined together at a single pin joint, with the other end of each bar pinned at the locations shown. For simplicity, bars 1 and 3 are assumed to have the same unknown cross-sectional area. The structure is loaded with a force applied at an angle at the pinned joint.

Figure 10: Probabilistic 3-bar truss problem

The objective of this optimization is to minimize the volume of the structure given constraints on the displacement of the loaded joint and the stress in each member. These stresses are given by Equations (89), (90), and (91), while the horizontal displacement of the joint is given by Equation (92) and the vertical displacement is given by Equation (93):

    (89)

    (90)

    (91)

    (92)

    (93)

Each stress and displacement is required to be less than a corresponding critical value. The deterministic optimization problem can then be formulated by applying a factor of safety FS to each stress and displacement constraint:

    min_{A₁,A₂}  V = (2√2 A₁ + A₂) H
    s.t.  FS·σ_i − σ_i,cr ≤ 0,  i = 1, 2, 3    (94)
          FS·u − u_cr ≤ 0
          FS·v − v_cr ≤ 0

In the probabilistic variant of the problem, the load, its direction, and the elastic modulus of all bars are treated as normally distributed random parameters. As shown by Equations (89) through (93), all stresses and displacements are functions of at least 2 of these parameters. Therefore, each constraint is replaced by a reliability constraint in an equivalent RBDO formulation, given below:

    min_{A₁,A₂}  V = (2√2 A₁ + A₂) H
    s.t.  β_t − β_i ≤ 0,  i = 1, ..., 5    (95)

Each reliability index is computed by solving a separate nested FORM problem:

    min_u  β = √(uᵀu)
    s.t.   g_i(u, x) = 0    (96)

Where the constraints g_i(u, x) in Equation (96) are given by the constraints in the deterministic problem with a factor of safety of 1. In the following subsections, the sensitivity analysis procedure developed in Section 3.2 is demonstrated and validated by considering the stress in the first bar for a single iteration of the design. Then, several deterministic optimization and RBDO solutions are obtained and compared for this problem.

4.2.2 Sensitivity Analysis Demonstration and Validation

The procedure for computing the derivatives of the MPP with respect to each design variable, described in Section 3.2, can be validated by considering the reliability analysis of the stress in the first bar. This stress, given by Equation (89), is a function of two normally distributed random parameters, the load and its direction, as well as two design variables, A₁ and A₂. The values of the random and deterministic parameters used in the sensitivity validation are summarized below in Table 1.

Table 1: Summary of parameters for the sensitivity validation study

The first step of the sensitivity analysis is to solve the FORM problem for the current design. This was performed using the Hasofer-Lind iteration procedure [3] with a tolerance on the convergence of the reliability index and the MPP. The results of this analysis are listed in Table 2, including the Lagrange multiplier of the equality constraint.
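The Hasofer-Lind iteration referenced above (often written HL-RF) updates the trial point by projecting onto the linearized failure surface and rescaling along the constraint gradient. A generic sketch, assuming a hypothetical linear limit state for which the iteration converges in one step and the MPP is known exactly:

```python
import numpy as np

def hlrf(g, grad_g, u0, tol=1e-10, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the MPP (sketch)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        gu, dg = g(u), grad_g(u)
        alpha = dg / np.linalg.norm(dg)
        # project onto the linearized surface g(u) + dg.(u_new - u) = 0
        u_new = (u @ alpha - gu / np.linalg.norm(dg)) * alpha
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)

# hypothetical limit state: failure when g(u) = 5 - u1 - 2*u2 <= 0
g = lambda u: 5.0 - u[0] - 2.0 * u[1]
grad = lambda u: np.array([-1.0, -2.0])

u_mpp, beta = hlrf(g, grad, [0.0, 0.0])
print(u_mpp, beta)  # u_mpp = [1, 2], beta = sqrt(5)
```

For nonlinear limit states the same update is applied repeatedly and may require damping, which is one reason gradient-based solvers such as SQP or interior-point methods are sometimes substituted for the classical iteration.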

Table 2: FORM solution for the reliability of the stress constraint for bar 1

The next step of the analysis is to obtain all of the required first- and second-order derivatives. The equation for the stress is sufficiently simple that analytic expressions for these derivatives in terms of the standard normal random parameters can be obtained. For example, the derivatives of the stress constraint with respect to the design variables are:

    (97)

    (98)

Similarly, the derivatives with respect to the standard normal random parameters are:

    (99)

    (100)

The second derivatives can then be computed. For simplicity, these derivatives are expressed in the vector and matrix forms required by Equation (86). It should be noted that the Hessian of the reliability index is not given below since it was previously derived in Equation (87).

    (101)

    (102)

    (103)

Next, the systems of linear equations in terms of the unknown sensitivities of the MPP from Equation (86) can be formed. Since there are two design variables, there are two systems which need to be solved. Each system has a total of 3 equations, since there are two random parameters and one equality constraint. After substituting in the data from Table 1 and the results of the FORM analysis from Table 2, these systems are:

    (104)

    (105)

Finally, these linear systems can be solved to obtain the sensitivities of the MPP with respect to the design variables. These results are summarized below in Table 3. For comparison, sensitivities obtained using the finite difference method and the complex step method are also included. These show very good agreement, thereby validating the sensitivities obtained using the Lagrange Multiplier Theorem approach.

Table 3: Comparison of MPP sensitivities found using three sensitivity analysis approaches

4.2.3 Deterministic and Reliability-Based Design Optimization Results

Having validated the sensitivity analysis procedure, this approach can then be used to solve a full RBDO problem. For comparison, deterministic optimization is also performed with an identical factor of safety applied to each constraint. Each analysis uses MATLAB's Sequential Quadratic Programming routine to control the design optimization process, with the problem configured using the parameters listed in Table 4.

Table 4: Summary of parameters for the deterministic optimization versus RBDO study

Parameter               | Mean Value   | Standard Deviation
A₁ (initial)            | 5 in²        | N/A
A₂ (initial)            | 2 in²        | N/A
Minimum area            | 0.001 in²    | N/A
Height                  | 10 in        | N/A
Load                    | 30,000 lb    | 4,500 lb
Elastic modulus         | (psi)        | (psi)
Stress limit            | 5,000 psi    | N/A
Stress limit            | 20,000 psi   | N/A
Horizontal displ. limit | 0.002 in     | N/A
Vertical displ. limit   | 0.002 in     | N/A

A total of 5 scenarios were considered. First, the deterministic optimization problem defined in Equation (94) was solved with a factor of safety of 1 applied to all constraints. Then, the deterministic optimization was repeated with a factor of safety of 1.5 applied to all constraints. Lastly, reliability-based design optimization was performed for three target reliability indices: 1.5, 2.0, and 3.0. These analyses used the sensitivity analysis calculations described in the previous section for every iteration of the design optimization loop. Additionally, the probability of failure of each constraint as well as the system-level probability of failure was computed using Monte Carlo simulation with 10⁶ random samples for all designs. The designs obtained using these 5 analyses are compared below in Table 5.

Table 5: Comparison of 3-bar truss designs using deterministic optimization and RBDO

The advantages of using reliability-based design optimization are clearly demonstrated in Table 5. First, RBDO generated a novel design. While the deterministic solutions utilized all 3 bars, the reliability-based design approach found that the probabilistic performance of the structure was dominated by the diagonal bars. Therefore, the cross-sectional area of the middle bar was quickly driven to its lower bound as the target reliability index increased. More interestingly, RBDO found a design which was not only more reliable, but also more optimal with respect to volume than the deterministic designs. Specifically, comparison of the deterministic solution for FS = 1.5 and the RBDO solution for β_t = 2.0 shows that RBDO produced a design which had a 2% lower volume but, more importantly, reduced the system probability of failure by 16%. Note that RBDO only constrains the component probabilities of failure, which were reduced by 56%.
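The Monte Carlo check used throughout this section amounts to counting limit-state violations over random samples. A generic sketch, assuming a hypothetical capacity-demand limit state for which the exact answer is also available from Equation (20):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# hypothetical limit state: failure when capacity R is less than demand S
n = 10 ** 6
R = rng.normal(10.0, 1.0, size=n)
S = rng.normal(6.0, 1.5, size=n)
pf_mcs = np.mean(R - S < 0.0)

# exact result for this linear Gaussian case, for comparison
beta = (10.0 - 6.0) / np.hypot(1.0, 1.5)
pf_exact = norm.cdf(-beta)
print(pf_mcs, pf_exact)
```

With 10⁶ samples the sampling error of the estimate is on the order of √(p(1−p)/n), small enough here to resolve the component- and system-level probabilities compared in Table 5.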

To explore the cause of this, the probability of failure for each constraint obtained using the Monte Carlo simulation described above can be compared, as shown in Table 6. In the deterministic solution, the design was constrained by the stress in the first bar and the horizontal displacement of the joint. The likelihood of each constraint being violated was vastly different, however, with the stress constraint having a probability of failure 2 orders of magnitude less than that of the displacement constraint. This indicates that while a factor of safety of 1.5 was sufficient for the horizontal displacement, it was too conservative for the stress constraint. The result is a design which may be optimal with respect to the deterministic criteria, but is suboptimal with respect to the stochastic performance of the system. In the RBDO solution, both of the displacement constraints were active with a FORM reliability index of 2, while all of the stress constraints were inactive. In comparison to the deterministic solution, the probability of failure is more equally distributed between the constraints. In particular, the excess safety of the stress in bar 1 has been reduced in order to increase the reliability of the other constraints. This highlights a key advantage of RBDO: excess component safety is redistributed in order to improve the overall system reliability.

Table 6: Comparison of constraint behavior for similar deterministic and RBDO designs

It should be noted that two brief studies were performed to further investigate these results. First, a study was performed to verify that the deterministic approach was not converging to a different, less optimal minimum.
Specifically, the deterministic optimization procedure was repeated with the initial design point chosen to be the β_t = 2.0 RBDO solution (8.236, ...). Despite starting at the RBDO solution, the deterministic optimization procedure still converged to the design (7.500, ...). Second, a brief study was performed to investigate why the probabilities of failure of the active constraints were lower than expected. Specifically, Equation (20) predicts a probability of failure for β_t = 2.0 of Φ(−2) ≈ 0.0228, while the MCS estimates were noticeably smaller. As shown by Figure 11, the probability distribution for the horizontal displacement

constraint obtained using Monte Carlo simulation is not Gaussian. Since the probability density function of a linear combination of normally distributed parameters is exactly normal, the non-Gaussian distribution indicates that the failure surface is actually nonlinear. Equation (20), however, assumes that the constraint is exactly linear with respect to the random parameters in the u-space, based on a linear expansion using the slope at the MPP. Thus, any nonlinearity in the failure surface will result in an error in the estimate of the probability of failure.

Figure 11: MCS distribution for the β_t = 2.0 horizontal displacement constraint
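The mechanism behind this discrepancy can be reproduced with a toy limit state. The curved limit state below is hypothetical: it is constructed so that the MPP, and therefore the FORM estimate Φ(−β), is unchanged by the curvature, while the true probability of failure is smaller because the failure surface bends away from the origin:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# hypothetical curved limit state in u-space: g(u) = 2 - u1 + 0.1*u2^2,
# failure when g <= 0; the MPP is (2, 0), so FORM predicts Phi(-2)
beta = 2.0
pf_form = norm.cdf(-beta)

u = rng.standard_normal((10 ** 6, 2))
g = 2.0 - u[:, 0] + 0.1 * u[:, 1] ** 2
pf_mcs = np.mean(g <= 0.0)

# the surface curves away from the origin, so FORM overpredicts pf here
print(pf_form, pf_mcs)
```

Curvature of the opposite sign would make FORM unconservative instead, which is the motivation for second-order corrections and for the multi-point linearization approach discussed in Section 2.6.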

4.3 Topology Optimization of a Benchmark Problem

4.3.1 Problem Statement

Having demonstrated that the sensitivity analysis procedure derived from the Lagrange Multiplier Theorem can be used to solve a double-loop reliability-based design optimization problem, this approach can be extended to reliability-based topology optimization. Fundamentally, the only difference between RBTO and RBDO is the size and scope of the problem. Because of the potentially thousands or more design variables, however, RBTO requires every step of the optimization to be as computationally efficient as possible. Since the proposed procedure provides analytic sensitivities by solving a linear system of equations, without requiring additional solutions of the nested FORM problem, it has potential applications in the field of reliability-based topology optimization. These applications will be investigated by first solving the RBTO benchmark problem developed by Rozvany and Maute [1]. Since the benchmark problem has an analytic solution, it will also serve to validate the RBTO routine developed for this research.

The benchmark problem consists of a rectangular region of material of unknown topology. The bottom edge is fixed while a deterministic vertical load and a random horizontal load are applied at the midpoint of the top edge. The horizontal load is normally distributed with a mean of 0 and a standard deviation σ_Fx. This configuration is depicted in Figure 9. Rozvany and Maute developed an analytic solution for the minimum weight topology which satisfies a constraint on the probability that the compliance does not exceed a critical value. This derivation did not require any assumptions about the shape of the structure. A brief summary of the results of their derivation follows.

Using an optimal layout theory approach, the minimum volume topology satisfying a constraint on the reliability of the compliance is a two-bar truss which is symmetric about the vertical axis, depicted in Figure 12.
The angle of the truss is related to the standard deviation of the horizontal force, σ_Fx, and the target reliability level, R_t, through a multi-step process. First, a critical value of the horizontal load, F_x,cr, can be defined:

    F_x,cr = σ_Fx Φ⁻¹(R_t)    (106)

The optimal angle can then be expressed in terms of the angle between F_x,cr and the deterministic vertical load F_y. This angle, β̄, is given by:

    β̄ = tan⁻¹(F_x,cr / F_y)    (107)

The optimal angle α is then computed using:

    α = tan⁻¹( [tan β̄ + √(8 tan β̄ + tan² β̄)] / 4 )    (108)

Figure 12: Analytic solution for the RBTO benchmark problem [1]

An equivalent RBTO problem can then be defined using the SIMP method for the topology optimization, discussed in Section 2.5.1, with the linear density filtering approach discussed previously. This requires the discretization of the region into a finite element mesh. Since the design region is rectangular, it is convenient to use a structured grid of linear quadrilateral plane-stress elements. For simplicity, each element is assumed to be identically rectangular with a width of Δx, a height of Δy, and a thickness t. Thus, the structured mesh is a grid of rows and columns of elements, each with an associated pseudo-density. This discretization is visualized in Figure 13.

Figure 13: Discretization of the RBTO benchmark problem

The objective of the equivalent RBTO formulation is to minimize the weight of a structure of unknown topology. Since each element has identical geometry, this is equivalent to minimizing the sum of the pseudo-densities. The topology is constrained to have a minimum reliability level against the compliance exceeding a critical value C_cr. Therefore, the RBTO problem is given by:

    min_ρ  Σ_e ρ_e
    s.t.   β_t − β ≤ 0    (109)

Where β is computed by solving a nested FORM problem each iteration:

    min_u  β = √(uᵀu)
    s.t.   C(u, ρ) − C_cr = 0    (110)

In the present study, RBTO is implemented in MATLAB using the Method of Moving Asymptotes [14] to control the design problem and MATLAB's Interior-Point algorithm [13] to solve the nested FORM problem each iteration. Both methods were chosen because their computational costs scale well for large numbers of design variables. Additionally, the Method of Moving Asymptotes is already a well-established approach for reliability-based topology

optimization, having been employed in several previous studies [1, 25, 26, 28]. The design problem is formulated using the unit-less values listed in Table 7. In general, these values are identical to those used by Rozvany and Maute [1], with the exception that the length-to-height ratio of the design region is reduced from 5:1 to 3:1. This is not expected to affect the results, since Rozvany and Maute's solution has a considerable amount of void on each side of the truss structure. This results in a 144 by 48 mesh of square plane-stress elements, requiring a total of 6,912 pseudo-densities. It should also be noted that, similar to Rozvany and Maute, a Poisson's ratio of 0 is used to approximate the behavior of the bars in the analytic solution. It is important to note that the nested problem requires multiple solutions of the finite element analysis, though the computational cost is offset slightly by the fact that the stiffness matrix does not need to be reassembled each iteration of the nested problem, since the only random parameter is a force. To avoid repeated solutions of the FORM problem, and therefore minimize the number of finite element solutions required, the sensitivity analysis procedure proposed in Section 3.2 is used. Its specific implementation is discussed in the following section.

Table 7: Summary of unit-less parameters for the RBTO benchmark problem

Parameter | Mean Value | Standard Deviation
Vertical load | 100 | N/A
Random horizontal load | 0 | 100/3
Horizontal elements | 144 | N/A
Vertical elements | 48 | N/A
Element width Δx | 1/48 | N/A
Element height Δy | 1/48 | N/A
Thickness | 1 | N/A
Filter radius | 1/24 | N/A
Young's modulus | 10^6 | N/A
Critical compliance | 10 | N/A
Poisson's ratio | 0 | N/A
Target reliability index β_T | 3 | N/A
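The nested FORM problem of Equation (110) can be sketched as a norm minimization over the failure surface. The snippet below is an illustrative Python/SciPy version (the thesis solves this with MATLAB's interior-point algorithm); a simple linear limit state with a known reliability index stands in for the compliance constraint:

```python
import numpy as np
from scipy.optimize import minimize

def form_beta(g, z0):
    """Nested FORM problem: minimize ||z|| over the failure surface
    g(z) = 0 in standard-normal space; returns (beta, MPP).
    Minimizing ||z||^2 is used because it is smooth at the origin."""
    res = minimize(lambda z: z @ z, np.atleast_1d(z0), method="SLSQP",
                   constraints={"type": "eq", "fun": g})
    z_mpp = res.x
    return np.linalg.norm(z_mpp), z_mpp

# Illustrative linear limit state g(z) = 4 - z1 - z2, for which the
# exact reliability index is 4 / sqrt(2) with MPP at (2, 2).
beta, z_mpp = form_beta(lambda z: 4.0 - z[0] - z[1], z0=np.array([1.0, 0.0]))
```

In the actual RBTO routine, the limit-state function would evaluate the finite element compliance at the realized loads rather than a closed-form expression.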

4.3.2 Sensitivity Analysis

Topology optimization problems can have thousands or more design variables. This necessitates analytic sensitivities, since finite differencing even a simple constraint becomes computationally expensive when it must be repeated many times over. In the present analysis, the sensitivities of the reliability index with respect to the pseudo-densities are computed using Equation (28). This requires the sensitivities of the MPP to the pseudo-densities, which are obtained using the procedure described in Section 3.2. For efficiency, this procedure requires analytic expressions for several derivatives and second derivatives. While the number of design variables suggests that obtaining expressions for each would be a futile effort, the coupling of the pseudo-densities with the finite element problem simplifies this considerably. Specifically, a given derivative can be expressed using a single equation for all pseudo-densities and/or the random loads by taking advantage of the direct and adjoint approaches discussed previously. The simplest example of this is the derivative of the compliance with respect to each pseudo-density. A complete derivation of this is presented below in order to demonstrate how these approaches can be applied. Starting with the definition of compliance from Equation (56):

$$c = \mathbf{F}^T \mathbf{U} \quad (111)$$

The derivative with respect to a given pseudo-density $\rho_i$ is given by:

$$\frac{\partial c}{\partial \rho_i} = \mathbf{F}^T \frac{\partial \mathbf{U}}{\partial \rho_i} \quad (112)$$

Equation (112) requires the derivative of the global displacement vector with respect to a given pseudo-density. The direct method can be used to obtain these derivatives, which are given by Equation (34). After substituting this expression in and considering that the force vector is not a function of the pseudo-densities, the following expression is obtained:

$$\frac{\partial c}{\partial \rho_i} = -\mathbf{F}^T \mathbf{K}^{-1} \frac{\partial \mathbf{K}}{\partial \rho_i} \mathbf{U} \quad (113)$$

After further simplification using various matrix properties, this reduces to:

$$\frac{\partial c}{\partial \rho_i} = -\mathbf{U}^T \frac{\partial \mathbf{K}}{\partial \rho_i} \mathbf{U} \quad (114)$$

As a last step, this can be further simplified by considering that the pseudo-density only multiplies a single element-level stiffness matrix, as shown in Equation (58). Therefore, the derivative of the global stiffness matrix reduces to the derivative of the element-level stiffness matrix:

$$\frac{\partial \mathbf{K}}{\partial \rho_i} = \mathbf{L}_i^T \frac{\partial \mathbf{k}_i}{\partial \rho_i} \mathbf{L}_i \quad (115)$$

where $\mathbf{L}_i$ is the locator matrix for element $i$. After substituting Equation (115) into Equation (114), the expression for the derivative of the compliance with respect to any pseudo-density is obtained:

$$\frac{\partial c}{\partial \rho_i} = -\mathbf{u}_i^T \frac{\partial \mathbf{k}_i}{\partial \rho_i} \mathbf{u}_i \quad (116)$$

where $\mathbf{u}_i = \mathbf{L}_i \mathbf{U}$ is a column matrix of the global displacements for the nodes of element $i$. The remaining derivatives can be computed using similar procedures. A full derivation of these partial derivatives is included in Appendix A. The resulting expressions, which are applicable for all pseudo-densities, are given below. First, the derivative of the compliance with respect to each random load is given by:

$$\frac{\partial c}{\partial F_j} = 2 u_j \quad (117)$$

where $u_j$ is the displacement of the nodal degree of freedom matching the location and direction of the random force. Using the adjoint method, the mixed second derivative for any pseudo-density and random force combination reduces to the expression below:

$$\frac{\partial^2 c}{\partial \rho_i \, \partial F_j} = -2 \, \boldsymbol{\kappa}_i^{(j)T} \frac{\partial \mathbf{k}_i}{\partial \rho_i} \mathbf{u}_i \quad (118)$$

where $\boldsymbol{\kappa}_i^{(j)}$ is a column vector analogous to the element-level nodal displacement matrix, except instead containing the element terms from the $j$th column of $\mathbf{K}^{-1}$. Lastly, the Hessian of the compliance with respect to any pair of random nodal forces is given by:

$$\frac{\partial^2 c}{\partial F_j \, \partial F_k} = 2 \left[\mathbf{K}^{-1}\right]_{jk} \quad (119)$$

where $[\mathbf{K}^{-1}]_{jk}$ is the term of the inverse stiffness matrix whose global indices match those of the two random forces. It should be noted that several of these equations require the inversion of the stiffness matrix, which is typically a computationally expensive procedure. It should be possible to derive equivalent expressions for these derivatives which do not require this inversion, but instead use the solution of a system of linear equations.

The sensitivity analysis was validated by comparing the sensitivities of the reliability index obtained using the Lagrange Multiplier Theorem approach to those obtained using finite difference derivatives for a range of step sizes. To save on computation costs, a coarser problem with 30 horizontal elements and 10 vertical elements was used, giving a total of 300 design variables. Additionally, only the first 5 iterations of the RBTO routine were computed. Otherwise, the problem formulation was identical to the full-scale problem. The maximum difference between the analytic and finite difference sensitivities for each iteration and step size is summarized in Table 8. As expected, these results were step-size dependent, with a step of 10^-7 providing the best agreement. At this step size, the derivatives agreed to within the maximum difference reported in Table 8. The maximum difference occurred in the second iteration, where the magnitudes of the analytic derivatives ranged from their smallest values in the void regions to their largest at the applied load. While the difference is within an order of magnitude of the smallest sensitivity, this is acceptable since the maximum sensitivity is 6 orders of magnitude greater.
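Equations (116), (117), and (119) can be checked numerically on a small model. The sketch below is illustrative Python/NumPy (the thesis implementation is in MATLAB): it builds a one-dimensional SIMP-penalized bar chain as a stand-in for the quadrilateral mesh, with an assumed element stiffness, penalty, and load, and computes the analytic compliance sensitivities in this notation.

```python
import numpy as np

def assemble_K(rho, p=3.0, k=100.0):
    """Global stiffness of a 1D bar chain (node 0 fixed) with
    SIMP-penalized element stiffnesses k_i = rho_i^p * k0."""
    n = rho.size
    K = np.zeros((n, n))
    k0 = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for i in range(n):
        ke = rho[i] ** p * k0
        dofs = [i - 1, i]          # element i joins nodes i and i+1
        for a in range(2):
            for b in range(2):
                if dofs[a] >= 0 and dofs[b] >= 0:
                    K[dofs[a], dofs[b]] += ke[a, b]
    return K

rho = np.array([0.8, 0.6, 0.9, 0.7])
p, k = 3.0, 100.0
f = np.array([0.0, 0.0, 0.0, 1.0])      # unit load at the free tip
K = assemble_K(rho, p, k)
u = np.linalg.solve(K, f)
c = f @ u                                # compliance, Equation (111)

# Analytic sensitivities in the notation of Equations (116), (117), (119):
k0 = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
u_full = np.concatenate(([0.0], u))      # prepend the fixed dof
dc_drho = np.array([-p * rho[i] ** (p - 1)
                    * (u_full[i:i + 2] @ k0 @ u_full[i:i + 2])
                    for i in range(rho.size)])
dc_df = 2.0 * u                          # dc/dF_j = 2 u_j
d2c_dff = 2.0 * np.linalg.inv(K)         # Hessian w.r.t. the nodal forces
```

A finite difference check of `dc_drho`, as performed in the thesis for the full problem, confirms the expressions on this toy model.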
Since the sensitivity of the void regions is orders of magnitude less than the sensitivity elsewhere, these densities should still be driven to their lower bounds even if there is a 10% error in their values. It is also important to note that some discrepancy is expected as a result of the step size and the subtractive errors associated with finite difference derivatives, discussed previously. Overall, the good agreement between the analytic and finite difference sensitivities validates the analytic sensitivity analysis procedure.

Table 8: Comparison of finite difference and analytic sensitivities for the RBTO benchmark

Step Size | Maximum difference for each iteration

To further explore the differences between the analytic and finite difference sensitivities, the variation of the sensitivities across the topology for each approach can be plotted. Since the sensitivities vary by multiple orders of magnitude across the topology, a logarithmic scale is used. The analytic sensitivities for the 4th iteration of the design are mapped in Figure 14, while the finite difference sensitivities computed with a step size of 10^-7 are mapped in Figure 15. Comparison of the two mappings shows very good agreement between the sensitivities across the topology. The only significant differences are in the upper left and right corners; however, these regions also have the minimum values of the sensitivities. Therefore, the error in these regions is not expected to influence the design. Lastly, a map of the absolute differences between the sensitivities computed using the proposed analytic method and the finite difference method with a step size of 10^-7 can be constructed, depicted in Figure 16. This shows that the errors were distributed randomly across the structure, with no correlation to whether the topology was a void or dense region.

Figure 14: 4th iteration analytic sensitivities on a logarithmic scale

Figure 15: 4th iteration finite difference sensitivities for a step size of 10^-7 on a logarithmic scale

Figure 16: Absolute difference between the analytic and finite difference sensitivities

4.3.3 Solution versus Reliability Index

RBTO was initially performed using reduced-scale problems to verify that the routine was converging towards the analytic solution. Unexpectedly, these tests converged to asymmetric designs, such as the topology depicted in Figure 17. This topology was obtained after 1,000 iterations of a 60 by 20 grid of elements with a target reliability index of 3. The optimization was terminated at this point because the number of iterations reached its cap; however, as shown by Figure 18, the changes in the objective function were minimal at this point. Similarly, the topology had converged to a design which satisfied the reliability constraint, as shown by Figure 19. While these convergence histories appear to be smooth, in reality the optimization procedure was oscillating. This can be seen from the norm of the residuals of the KKT conditions plotted in Figure 20.

Figure 17: RBTO benchmark topology after 1,000 iterations with β_T = 3

Figure 18: Convergence of the weight of the RBTO benchmark topology with β_T = 3

Figure 19: Convergence of β for the RBTO benchmark topology with β_T = 3

Figure 20: Norm of the residual of the KKT equations for the RBTO benchmark with β_T = 3

Upon further investigation, it was found that this oscillation is related to the symmetry of the problem. Specifically, the loads and boundary conditions are symmetric about the vertical midplane of the design region, as shown in Figure 13. This includes the random horizontal force, which is normally distributed with zero mean, implying that a given realization is exactly as probable as the equivalent with the opposite sign. Furthermore, this implies that the compliance is at a minimum at the mean value and will always increase regardless of whether the random force has a positive or negative realization. Thus, there are two MPPs in any given iteration: one associated with a positive value of the force and another associated with a negative value. For a topology which is symmetric about the midplane, such as the analytic solution or the evenly distributed densities used for the initial topology, these MPPs will hold the exact same value but opposite signs. There are also two distinct failure regions: one containing all force realizations less than the negative MPP and the other containing all force realizations greater than the positive MPP. Therefore, a linear expansion about one MPP only captures one failure region, resulting in the reliability of the system being significantly overestimated. For a fully symmetric topology, the probability of failure computed using Equation (20) is exactly half the actual value.
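The halving of the computed failure probability can be stated directly. Assuming Equation (20) is the usual FORM relation P_f = Φ(−β), a symmetric problem with two equally distant MPPs has twice the one-sided probability; a minimal sketch using only the standard library:

```python
import math

def pf_form(beta):
    """FORM probability of failure, Pf = Phi(-beta), via erfc."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

beta = 3.0
pf_one_mpp = pf_form(beta)         # linearization about a single MPP
pf_system = 2.0 * pf_form(beta)    # symmetric problem: both failure tails
```

For β = 3 the one-sided estimate is about 1.35 × 10^-3, so the symmetric system probability is about 2.70 × 10^-3, exactly double.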

The MPP is obtained by solving a minimization problem, as detailed previously. Therefore, only a single MPP is computed each time the FORM subroutine is called. This occurs once per iteration of the RBTO benchmark problem formulation, given by Equation (109), so only one of the two MPPs is considered when formulating the next iteration of the design. From the perspective of the optimizer, this is equivalent to optimizing for a one-sided random load in a given iteration, which encourages an asymmetric design. The challenge is that in the next iteration, the MPP may switch, as shown in Figure 21. If too big a step was taken, however, the topology may not return to a symmetric design after the new iteration, particularly if the move limits are adjusted between iterations. This behavior can be seen by plotting the first 10 iterations of the asymmetric solution, depicted in Figure 22. Thus, the topology gradually moves towards an asymmetric solution which satisfies the reliability constraint but is not optimal.

Figure 21: Oscillations of the MPP for the RBTO benchmark with β_T = 3

Figure 22: First 10 iterations of the asymmetric RBTO solution

To avoid this behavior, the problem formulation was modified to consider both MPPs by adding a second reliability constraint. Both reliability constraints use the formulation from Equation (110); however, one constraint considers the reliability with respect to the positive MPP while the other considers the reliability with respect to the negative MPP. The new formulation for the design loop is given in Equation (120):

$$\min_{\boldsymbol{\rho}} \; \sum_i \rho_i \quad \text{subject to:} \quad \beta_T - \beta^{+}(\boldsymbol{\rho}) \le 0, \qquad \beta_T - \beta^{-}(\boldsymbol{\rho}) \le 0 \quad (120)$$

where $\beta^{+}$ is the reliability index for the positive MPP and $\beta^{-}$ is the reliability index for the negative MPP. To ensure that the correct MPP was found, the initial analysis point for each FORM problem was chosen to be several standard deviations away from the mean value of the random load, in either the positive or the negative direction. In general, this formulation encourages symmetry; however, it requires twice the computational effort. It is important to note that the sensitivity analysis procedure detailed in Section 4.3.2 is still applicable, though it needs to be repeated twice. As shown in Figure 23, after this modification a topology qualitatively similar to the analytic solution was obtained. Since the analytic solution assumes a continuum while RBTO is discretized, small discrepancies between the two results were expected. Furthermore, repeating the analysis with the previously found asymmetric solution (Figure 17) as the initial design point with β_T = 3 converges to the expected topology, as shown by Figure 24. This indicates that the asymmetric solution was not actually a local minimum. This also demonstrates that the modified approach is robust, since it was able to converge without assuming that the initial design is symmetric.

Figure 23: Topology obtained for β_T = 3 after reformulation to consider multiple MPPs
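The seeding strategy can be illustrated on a one-dimensional symmetric limit state with MPPs at ±β_T: starting the nested problem on either side of the mean recovers the corresponding MPP. This is an illustrative Python/SciPy sketch, not the thesis code:

```python
import numpy as np
from scipy.optimize import minimize

def form_mpp(g, z0):
    """Single MPP via the nested FORM problem, seeded at z0."""
    res = minimize(lambda z: z @ z, np.atleast_1d(z0), method="SLSQP",
                   constraints={"type": "eq", "fun": g})
    return res.x

# Symmetric limit state g(z) = 9 - z^2 with MPPs at z = +3 and z = -3,
# mimicking the compliance constraint under a zero-mean random load.
g = lambda z: 9.0 - z[0] ** 2
beta_T = 3.0
z_pos = form_mpp(g, +beta_T)   # seed on the positive side of the mean
z_neg = form_mpp(g, -beta_T)   # seed on the negative side
```

Each seed drives the optimizer to the MPP on its own side, so both reliability constraints in Equation (120) can be evaluated.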

Figure 24: Topology found using the modified approach starting at the asymmetric solution

As shown by Equations (106) through (108), the truss angle is expected to increase as the required reliability level increases. To verify this behavior, the modified RBTO benchmark problem was solved for β_T = 1, 2, 3, 4, 5, and 6. The resulting topologies are depicted in Figure 25. Qualitatively, these topologies demonstrate the expected behavior, with the trusses becoming wider as β_T is increased.

Figure 25: Solutions to the RBTO benchmark problem for different reliability indices

Figure 26: Estimation of α for the RBTO benchmark problem with β_T = 4

To verify the present results, a rough estimate of the angle from the vertical, α, was obtained for each topology by drawing an approximate line of best fit on one of the truss members. An example of this for the β_T = 4 topology is depicted in Figure 26. Two points, (x₁, y₁) and (x₂, y₂), were then used to approximate the slope of the line. The slope of the line can then be related to the truss angle using Equation (121):

$$\tan\alpha = \frac{x_2 - x_1}{y_2 - y_1} \quad (121)$$

The approximate angles found for each topology using Equation (121) are compared to the exact solutions obtained using Equations (106) through (108) in Table 9. As expected, the approximated angles differ by a small amount from the exact solutions. This is likely a result of the discretization as well as errors arising from the choice of the fitting points. The general trend of α versus β_T correlates well with the exact solution, however, thus validating the RBTO routine.
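The two-point slope estimate of Equation (121) amounts to tan α = Δx/Δy for an angle measured from the vertical; a minimal helper (the point values below are arbitrary, not taken from the thesis figures):

```python
import math

def truss_angle_from_points(x1, y1, x2, y2):
    """Angle from the vertical (degrees) of a line through two points
    picked on a truss member: tan(alpha) = |dx| / |dy|."""
    return math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))

# Example: a member rising 2 units for every 1 unit of horizontal run.
alpha = truss_angle_from_points(0.0, 0.0, 1.0, 2.0)
```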

Table 9: RBTO benchmark reliability indices versus approximated truss angle

As a last step, Monte Carlo simulation was performed for the β_T = 2 solution to verify that the topology has the required reliability level. Using Equation (20), the expected probability of failure for one of the MPPs is Φ(−2) ≈ 0.0228. Therefore, the expected system probability of failure is approximately 0.0455. Using Monte Carlo simulation of the optimal topology with 10,000 samples, a probability of failure consistent with this estimate was obtained, confirming that the RBTO procedure converged to a reliable topology. The compliance constraint as a function of the horizontal load can also be mapped, as shown in Figure 27. This confirmed the symmetric behavior of the compliance and the presence of multiple MPPs. The probability density function for the compliance constraint can also be approximated using the samples, as shown in Figure 28. Overall, this analysis proved that the proposed sensitivity analysis procedure can be incorporated into reliability-based topology optimization problems. In the process, an interesting behavior related to the symmetry of the reliability problem was observed. This required a modification to the problem formulation so that the reliability with respect to multiple MPPs was considered. It is important to note, however, that this modification required a priori knowledge of the existence of multiple MPPs. Therefore, this procedure is not universally applicable.
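The Monte Carlo check can be sketched with a standard-normal sample and the symmetric failure event |z| > β_T; for β_T = 2 the exact two-sided probability is 2Φ(−2) ≈ 0.0455. This is illustrative standard-library Python, not the thesis MCS code, which samples the finite element compliance:

```python
import math
import random

random.seed(0)
beta = 2.0
n_samples = 10_000
# Failure when |z| exceeds beta, mimicking the symmetric compliance
# constraint with two MPPs at z = +beta and z = -beta.
fails = sum(1 for _ in range(n_samples)
            if abs(random.gauss(0.0, 1.0)) > beta)
pf_mcs = fails / n_samples
pf_exact = math.erfc(beta / math.sqrt(2.0))   # = 2 * Phi(-beta)
```

With 10,000 samples the sampling standard error is about 0.002, so the estimate should land within roughly one percentage point of the exact value.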

Figure 27: Symmetry of the compliance constraint for the RBTO benchmark with β_T = 2

Figure 28: Probability density function for the compliance constraint obtained using MCS

4.4 Multiple Random Loads

4.4.1 Problem Statement

The sensitivity analysis procedure for the RBTO benchmark problem, derived in Section 4.3.2, is applicable for any number of random loads. To verify this, a new reliability-based topology optimization problem with two random forces was defined. Similar to the benchmark problem, the design region is a rectangular grid of linear quadrilateral plane-stress elements. The boundary conditions differ from the benchmark problem, however. In the new formulation, the left edge of the region is fixed so that the plate is cantilevered. The two random loads are applied vertically along the centerline of the region at three-quarters of the width and at the full width. This configuration is depicted in Figure 29. While the boundary conditions are different, the formulation of the optimization problem is unchanged from the benchmark problem. For clarity, the design problem and nested FORM problem are repeated below. The RBTO problem is given by:

$$\min_{\boldsymbol{\rho}} \; \sum_i \rho_i \quad \text{s.t.} \quad \beta_T - \beta(\boldsymbol{\rho}) \le 0 \quad (122)$$

where $\beta$ is computed by solving a nested FORM problem each iteration:

$$\min_{\mathbf{z}} \; \|\mathbf{z}\| \quad \text{s.t.} \quad c(\mathbf{z}; \boldsymbol{\rho}) - c_{cr} = 0 \quad (123)$$

The symmetry problems encountered in the RBTO benchmark are avoided by specifying nonzero means and relatively small standard deviations. Therefore, while there may be solutions for the compliance constraint corresponding to values of the forces with the opposite sign, these realizations will be exceedingly unlikely and have little bearing on the reliability of the topology. The unit-less values used for all parameters in this analysis are summarized in Table 10.

Figure 29: Problem formulation for RBTO with two random loads

Table 10: Summary of unit-less parameters for the multiple random loads RBTO problem

Parameter | Mean Value | Standard Deviation
First random load | — | 10
Second random load | 200 | 10 (Case 1) or 20 (Case 2)
Horizontal elements | 144 | N/A
Vertical elements | 48 | N/A
Element width Δx | 1/48 | N/A
Element height Δy | 1/48 | N/A
Thickness | 1 | N/A
Filter radius | 1/24 | N/A
Young's modulus | — | N/A
Critical compliance | 1 | N/A
Poisson's ratio | 0 | N/A
Target reliability index β_T | 3 | N/A

A secondary objective of this analysis is to compare the topologies obtained using the deterministic and reliability-based approaches. As demonstrated in Section 4.2 with a 3-bar truss, RBDO can lead to novel designs which may be more reliable and/or optimal than those found using deterministic optimization, depending on how the deterministic problem is formulated. It is expected that reliability-based topology optimization will have similar benefits with respect to deterministic topology optimization. To investigate this, a deterministic formulation of the multi-load problem is also solved using several different factors of safety and then compared to the RBTO solution. The deterministic formulation uses the same boundary conditions and parameters as the RBTO formulation, with the random loads replaced by deterministic equivalents with magnitudes equal to the mean values of the random loads. The problem formulation for the deterministic topology optimization is given in Equation (124):

$$\min_{\boldsymbol{\rho}} \; \sum_i \rho_i \quad \text{s.t.} \quad c(\boldsymbol{\rho}) - c_{cr} \le 0 \quad (124)$$

Deterministic topology optimization was initially performed using the Method of Moving Asymptotes (MMA) [14]; however, it was observed that this approach tended to converge to infeasible solutions for this specific problem, regardless of how the compliance constraint was scaled. To avoid this behavior, MATLAB's implementation of the interior-point method [13] was used instead. This method was chosen since it incorporates a barrier function to stay in the feasible region of the design space. As a trade-off, the pseudo-densities tend to be near but not exactly at their bounds. For example, a typical pseudo-density found at the root of the structure using MMA would be 1.000, while the interior-point method would converge to a value slightly below 1 instead. Sensitivity analysis was performed using the expression for the sensitivities of the compliance with respect to any pseudo-density given by Equation (116).

4.4.2 RBTO Solutions

The sensitivities were first verified using a reduced-scale problem. This problem used an 18 by 6 grid of elements, giving a total of 108 design variables. The finite difference approach was used to generate a baseline. A step size of 10^-7 was chosen for this analysis since this step size provided the most accurate derivatives in the step-size study performed for the RBTO benchmark problem, detailed in Table 8. With this step size, the maximum difference between the finite difference and analytic sensitivities in the second iteration of the design was negligible, indicating that there was very good agreement between the two methods. This proves that the sensitivity equations derived in Section 4.3.2 for the RBTO benchmark problem are applicable for any compliance-constrained formulation with random loading.

Having validated the sensitivity analysis procedure, RBTO was then performed for the deflection-limited cantilever structure with the full-scale mesh. Two cases were considered, each with a target reliability index of 3 but different standard deviations of the applied forces. In the first case, both forces had a standard deviation of 10. The topology obtained using RBTO is depicted in Figure 30. This topology resembles a truss structure composed of relatively few but thick members, symmetric about its horizontal mid-plane, with a total weight of 2,305. Since there are 6,912 pseudo-densities in the full-scale problem, this indicates that only 33% of the material was used.

Figure 30: RBTO solution for two random loads with both standard deviations equal to 10

For the second case, the standard deviation of the second random load was increased from 10 to 20, which is equivalent to 10% of its mean value. The standard deviation of the first load remained unchanged. The resulting topology for β_T = 3 is depicted in Figure 31. This topology has a weight of 2,547, which is an increase of 10% over the RBTO solution for the first case. Interestingly, this topology consists of many narrow members, unlike Case 1. This type of complex structure is typical of many topology optimization problems.

Figure 31: RBTO solution for two random loads with the increased standard deviation of 20

4.4.3 Comparison to Deterministic Solutions for Various Factors of Safety

Deterministic topology optimization was performed for 3 different factors of safety: 1.25, 1.375, and 1.5. The resulting topologies are depicted in Figure 32, Figure 33, and Figure 34, respectively. Comparison of the deterministic topologies shows that increasing the factor of safety altered the topology by adding more X intersections to the structure. A factor of safety of 1.25 had 2 distinct intersections while 1.5 had 3. The topology for a factor of safety of 1.375 appears to be a transition, with a 3rd intersection of reduced size nested between two larger intersections. Interestingly, this intermediate topology bears a strong resemblance to the RBTO solution for Case 1, depicted in Figure 30. This demonstrates that deterministic approaches can still produce reliable designs; however, unlike reliability-based approaches, there is no guarantee of this. It is also important to note that none of the deterministic topologies are similar to the RBTO solution with the increased standard deviation, depicted in Figure 31.

Figure 32: Deterministic topology for the multiple-load problem with a factor of safety of 1.25

Figure 33: Deterministic topology for the multiple-load problem with a factor of safety of 1.375

Figure 34: Deterministic topology for the multiple-load problem with a factor of safety of 1.5

Monte Carlo simulation using 10,000 samples was then performed to compare the reliability of the deterministic topologies to that of the RBTO solutions. This analysis was performed twice for each design: once with both standard deviations equal to 10 (Case 1) and again with the increased standard deviation of 20 (Case 2). The weights and probabilities of failure for the deterministic and reliability-based topologies are summarized in Table 11.

Table 11: Comparison of deterministic and reliability-based topologies for multiple loads

Design Approach | Weight | Pf (MCS), Case 1 | Pf (MCS), Case 2 | Pf (FORM), Case 1 | Pf (FORM), Case 2
Det., FS = 1.25 | — | — | — | — | —
Det., FS = 1.375 | — | — | — | — | —
Det., FS = 1.5 | 2,609 | — | — | — | —
RBDO, β_T = 3.0, Case 1 | 2,305 | — | — | — | N/A
RBDO, β_T = 3.0, Case 2 | 2,547 | — | — | N/A | —

The FORM approximation of the probability of failure for β_T = 3.0 computed using Equation (20) is Φ(−3) ≈ 1.35 × 10^-3. As expected, both RBDO solutions had similar probabilities of failure. The deterministic solutions had a wide range of probabilities. Of note is that the deterministic topology for a factor of safety of 1.5 (Figure 34) had a similar performance to the RBDO Case 2 design (Figure 31) under the Case 2 loading, despite the dissimilar topologies. This demonstrates the ability of RBTO to find novel topologies. More importantly, by using reliability-based topology optimization it was possible to obtain this result after a single analysis instead of using a trial-and-error approach similar to that required to obtain the equivalent deterministic topology.

4.4.4 Comparison of Analytic and Finite Difference Computation Times

A final objective of the multiple random load analysis is to investigate the savings in computation time achieved using the proposed sensitivity analysis approach. To do this, the first iteration of the RBTO problem with multiple random loads was re-solved twice for several different mesh sizes. The first solution utilized the sensitivity analysis procedure developed in Section 3.2. The second solution utilized the finite difference approach, discussed in Section 2.4.1, which required re-solving the FORM problem once for every pseudo-density. It is

assumed that every MPP found when a pseudo-density is perturbed will be in the neighborhood of the original MPP, so for efficiency the finite difference derivatives used the baseline MPP as the initial point when solving the FORM problem. Both analyses used the parameters for Case 2, summarized in Table 10, with the exception of the number of horizontal and vertical elements. The meshes considered and the resulting computation times are summarized in Table 12. It is important to note that the computation time for each method includes the time required to solve the initial FORM problem, and that parallelization of the finite difference derivatives was not considered.

Table 12: Analytic and finite difference computation times versus mesh density

Mesh | Number of Pseudo-Densities | Analytic Computation Time (s) | Finite Difference Computation Time (s) | Percent Reduction

Comparison of the computation times shows that the analytic approach required far less computation time, even for very coarse meshes. For example, the time required to solve a single iteration of the RBTO algorithm for a 30 by 10 mesh was reduced by 50 s by switching to analytic derivatives. This is equivalent to a reduction of over 90%. Furthermore, the difference in the computation costs rapidly increased as the number of pseudo-densities increased, with a 60 by 20 mesh requiring 890 s less when using analytic derivatives. Assuming 26 s per iteration, this indicates that over 35 iterations of the RBTO algorithm can be completed using the proposed sensitivity analysis procedure for every iteration of the finite difference algorithm on the 60 by 20 mesh. Clearly, the proposed sensitivity analysis procedure is more efficient.

To visualize the efficiency of the proposed sensitivity analysis procedure, the computation time as a function of the number of pseudo-densities can be plotted, as shown in Figure 35. This clearly depicts the rapid divergence in the computation cost between the

approaches. Specifically, the finite difference approach exhibits a non-linear increase in the computation cost which resembles a cubic function. This is unlike the analytic sensitivity procedure, which shows a more linear increase in the computation time with far slower growth. Thus, the analytic procedure is expected to become increasingly efficient as the size of the reliability-based topology optimization problem is increased. Overall, the number of pseudo-densities considered in this study is small compared to that of many topology optimization problems, yet there are already significant time savings achieved using the new approach. For these larger problems, the computation cost of finite difference derivatives is expected to be impractically large based on the observed trends; however, this computational challenge can be avoided by using the proposed sensitivity analysis procedure.

Figure 35: Computation time for one design iteration versus number of pseudo-densities
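The scaling argument above can be made concrete with a simple count of finite element solutions per design iteration. This is a rough, assumed cost model (it ignores assembly reuse and optimizer overhead), not a timing of the thesis code:

```python
def fe_solves_per_design_iteration(n_densities, form_iters, analytic=True):
    """Rough count of finite element solutions for one RBTO design
    iteration. Finite differencing re-solves the FORM problem once per
    pseudo-density; the analytic approach solves FORM once and obtains
    all sensitivities from a single additional linear system."""
    if analytic:
        return form_iters
    return form_iters * (n_densities + 1)   # baseline FORM + one per density

# Illustrative 60-by-20 mesh (1,200 pseudo-densities), assuming each
# FORM solve takes 10 iterations of the finite element analysis.
fd = fe_solves_per_design_iteration(1200, form_iters=10, analytic=False)
an = fe_solves_per_design_iteration(1200, form_iters=10, analytic=True)
```

The finite difference count grows linearly with the number of pseudo-densities while the analytic count does not, which is consistent with the divergence observed in Figure 35.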

4.5 RBTO of a Deflection-Limited Cantilever Structure

4.5.1 Problem Statement

The reliability-based benchmark problem developed by Rozvany and Maute [1] is an ideal test case because of the simplicity of the volume objective and compliance constraint, as well as the availability of an analytic solution. The benchmark problem does not represent a realistic implementation of reliability-based topology optimization, however. Compliance is typically used as an objective rather than a constraint, since it is equivalent to the average strain energy density of the structure. Thus, minimization of the compliance serves as an efficient proxy for the stiffness of the structure, since a structure with a lower strain energy has deformed less and is therefore stiffer. Introducing compliance as a constraint requires specifying a target value. The challenge is that it is not easy to correlate a value of the compliance to measurable properties such as the deflection of the structure. Therefore, a more useful problem would be to minimize the volume of a topology subject to a constraint on the displacement of a key location. In this analysis, a cantilever plate structure is constrained by the vertical displacement of the tip of the plate, designated point P. Specifically, the reliability against the vertical displacement at point P, $u_P$, exceeding a critical value must be greater than a target reliability index of 3, which is equivalent to a probability of failure of approximately 1.35 × 10^-3. A normally-distributed vertical force is applied at point P while the opposite edge of the plate is fixed. Similar to the previous examples, the design region is modeled as a rectangular grid of linear quadrilateral plane-stress elements, each with an associated pseudo-density. This problem formulation is depicted in Figure 36. The objective of the deflection-limited RBTO problem is to minimize the weight of the structure given the constraint on the reliability of the tip displacement.
The weight is equivalent to the sum of the pseudo-densities, while the reliability can be assessed using the First-Order Reliability Method. In standard form, the RBTO problem is given by:

$$\min_{\boldsymbol{\rho}} \; \sum_i \rho_i \quad \text{s.t.} \quad \beta_T - \beta(\boldsymbol{\rho}) \le 0 \quad (125)$$

where $\beta$ is computed by solving a nested FORM problem each iteration:

$$\min_{\mathbf{z}} \; \|\mathbf{z}\| \quad \text{s.t.} \quad \mathbf{L}^T \mathbf{U}(\mathbf{z}; \boldsymbol{\rho}) - u_{cr} = 0 \quad (126)$$

The column vector $\mathbf{L}$ in Equation (126) is chosen to be zero everywhere except for the term with the same global index as $u_P$, which has a value of 1. Therefore, $\mathbf{L}^T \mathbf{U}$ isolates the displacement of interest from the global displacements vector. The nested constraint is expressed in this manner so that the adjoint method, detailed in Section 2.4.3, can be used to efficiently obtain the derivatives of $u_P$ with respect to the pseudo-densities and the random load. Further details of the sensitivity analysis for the random displacement problem are presented in Section 4.5.2. A summary of the parameters used for this analysis is presented in Table 13. These parameters are similar to those used in the RBTO benchmark, so it is important to note that the values do not correlate to any particular unit system. However, this problem can easily be reformulated to consider realistic values for the material properties and loading.

Figure 36: Problem formulation for the deflection-limited RBTO problem
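The selector vector described above can be sketched in a few lines; this is an illustrative Python/NumPy snippet in which the vector is called L and the constrained degree of freedom is chosen arbitrarily:

```python
import numpy as np

def selector_vector(n_dof, constrained_dof):
    """Column vector L with a single 1 at the constrained degree of
    freedom, so that L^T u isolates the displacement of interest."""
    L = np.zeros(n_dof)
    L[constrained_dof] = 1.0
    return L

# Example: pick out the third displacement from a 4-dof vector.
u = np.array([0.1, -0.3, 0.7, 0.2])
L = selector_vector(4, 2)
u_P = L @ u
```

Expressing the constraint through L is what lets the adjoint method produce one general sensitivity expression per derivative rather than one per degree of freedom.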

Table 13: Summary of unit-less parameters for RBTO of a deflection-limited structure

Parameter                              Mean Value    Standard Deviation
Random load F                          μ_F           μ_F / 3
Target reliability index β_t           3 or 4        N/A
Elements along the length              144           N/A
Elements along the height              48            N/A
Element width Δx                       1/48          N/A
Element height Δy                      1/48          N/A
Plate thickness                        1             N/A
Critical tip deflection Δ_cr           1/24          N/A
Elastic modulus                        45,000        N/A
Maximum pseudo-density                 1             N/A
Poisson's ratio                        0             N/A
Penalization exponent p                3             N/A

Sensitivity Analysis

Reliability-based topology optimization requires efficient sensitivity analysis because repeated solutions of the nested FORM problem quickly become computationally intractable as the number of pseudo-densities increases. As in the RBTO benchmark problem, the sensitivity analysis procedure proposed in Section 3.2 is used here to obtain analytic sensitivities of the reliability index with respect to the pseudo-densities. This procedure requires expressions for the first and second derivatives of the displacement constraint from Equation (126) with respect to the pseudo-densities and the random nodal force. The adjoint method is used to obtain general expressions for these sensitivities which are applicable for any pseudo-density or random force. Only the results of these derivations are presented in this section; full derivations for each partial derivative are included in Appendix B.

The first sensitivity required is the derivative of the displacement with respect to each pseudo-density, which can be derived using the adjoint approach discussed in Section 2.4.3. The resulting expression for the derivative is given by Equation (127):

    ∂u_A/∂ρ_i = −λ_iᵀ (∂k_i/∂ρ_i) d_i                      (127)

where λ_i is a column vector analogous to the element nodal displacement vector d_i, except containing the terms from the column of K⁻¹ with the same index as the constrained displacement u_A. The next derivative required is the sensitivity of the displacement constraint to the random loads. The adjoint method is also used to obtain a general expression for the sensitivity with respect to any random force F, given below in Equation (128):

    ∂u_A/∂F = [K⁻¹]_AF                                     (128)

where [K⁻¹]_AF is the term of the inverse stiffness matrix with the same global indices as the constrained displacement and the random force. The mixed derivative can then be computed by taking the derivative of Equation (127) with respect to a random force. As with the other sensitivities, the adjoint method is required, since Equation (127) is a function of the state variables, which in turn are functions of the design variables and random parameters. The resulting expression is given in Equation (129):

    ∂²u_A/∂ρ_i ∂F = −λ_iᵀ (∂k_i/∂ρ_i) μ_i                  (129)

where μ_i is a column vector analogous to the element-level nodal displacement vector, except containing the terms from the column of K⁻¹ with the same global index as F. Lastly, every term in the Hessian of the displacement constraint with respect to any pair of random nodal forces is simply zero, since the first derivative of the displacement constraint, given in Equation (128), is a term from the inverse stiffness matrix, which is not a function of the random loads:

    ∂²u_A/∂F_m ∂F_n = 0                                    (130)
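The structure of the adjoint sensitivities can be exercised on a toy system: two springs in series with SIMP-penalized pseudo-densities, where the adjoint vector is simply the column of K⁻¹ associated with the tip DOF (K is symmetric). The spring stiffnesses, densities, load, and penalty below are illustrative values, not the model used in this chapter; the sketch compares the adjoint expression for ∂u_A/∂ρ_i against central finite differences.

```python
# Toy adjoint-sensitivity check: 2 springs in series, load F at the tip DOF.
# K(rho) = [[k1*rho1^p + k2*rho2^p, -k2*rho2^p], [-k2*rho2^p, k2*rho2^p]]
p = 3.0
k = [10.0, 20.0]          # base spring stiffnesses (illustrative)
rho = [0.7, 0.4]          # pseudo-densities (illustrative)
F = 5.0                   # load at the tip DOF (illustrative)

def stiffness(rho):
    k1, k2 = k[0] * rho[0] ** p, k[1] * rho[1] ** p
    return [[k1 + k2, -k2], [-k2, k2]]

def solve2(K, b):
    """Direct solve of a 2x2 system K x = b."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [(K[1][1] * b[0] - K[0][1] * b[1]) / det,
            (K[0][0] * b[1] - K[1][0] * b[0]) / det]

K = stiffness(rho)
d = solve2(K, [0.0, F])       # displacements; tip displacement u_A = d[1]
lam = solve2(K, [0.0, 1.0])   # adjoint vector: the column of K^{-1} for u_A

def dK_drho(i):
    """Element contribution dK/drho_i with SIMP penalization."""
    s = p * rho[i] ** (p - 1) * k[i]
    return [[s, 0.0], [0.0, 0.0]] if i == 0 else [[s, -s], [-s, s]]

def analytic(i):
    """Adjoint sensitivity: du_A/drho_i = -lam^T (dK/drho_i) d."""
    dK = dK_drho(i)
    return -sum(lam[r] * sum(dK[r][c] * d[c] for c in range(2))
                for r in range(2))

def fd(i, h=1e-7):
    """Central finite-difference sensitivity for comparison."""
    r1 = rho[:]; r1[i] += h
    r2 = rho[:]; r2[i] -= h
    return (solve2(stiffness(r1), [0.0, F])[1] -
            solve2(stiffness(r2), [0.0, F])[1]) / (2 * h)

for i in range(2):
    print(i, analytic(i), fd(i))
```

The sensitivity to the load, ∂u_A/∂F, is the (tip, tip) entry of K⁻¹, which in this sketch is simply lam[1], mirroring the inverse-stiffness term above.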

The sensitivities obtained using the proposed procedure were validated by comparing the analytic sensitivities of the reliability index with respect to each design variable against sensitivities obtained using finite difference derivatives with a fixed step size. A reduced-scale problem with an 18 × 6 mesh was used to obtain these sensitivities. Of the 108 sensitivities computed, the maximum difference between the analytic and finite difference values at the second iteration was roughly five orders of magnitude smaller than the largest sensitivity magnitude of 0.269. The impact of such discrepancies on the optimization process is therefore negligible. It is important to note that this discrepancy most likely stems from the subtractive cancellation and step-size errors associated with the finite difference method.

RBTO Solution

After validating the sensitivities, reliability-based topology optimization was performed on the full-scale mesh with target reliability indices of β_t = 3 and β_t = 4. The resulting topologies are depicted in Figure 37 and Figure 38, respectively. The topology optimized for β_t = 3 has a weight of 2,833, corresponding to 41% of the material available, while the topology optimized for β_t = 4 is heavier, with a weight of 3,265, equivalent to 47% of the material available. Qualitatively, these topologies are similar to the topology obtained for the probabilistic compliance problem with two random loads, documented in Figure 30. While the loading was different for each problem, both topologies exhibit a similar series of thick members intersecting each other in a manner resembling a truss structure. This demonstrates that compliance is a good approximation for the stiffness of a structure.
Unlike the compliance-based formulations, however, the topologies obtained using the displacement formulation given by Equations (125) and (126) are optimized for an easily measurable property: the displacement of the tip of the cantilever structure. Overall, the solution of the displacement-constrained problem demonstrates that the sensitivity analysis procedure proposed in Section 3.2 can be applied to a wide range of reliability-based topology optimization problems.

Figure 37: Optimum topology for the deflection-limited cantilever plate problem with β_t = 3

Figure 38: Optimum topology for the deflection-limited cantilever plate problem with β_t = 4


More information

OPTIMIZATION METHODS

OPTIMIZATION METHODS D. Nagesh Kumar Associate Professor Department of Civil Engineering, Indian Institute of Science, Bangalore - 50 0 Email : nagesh@civil.iisc.ernet.in URL: http://www.civil.iisc.ernet.in/~nagesh Brief Contents

More information

Lecture VI: Constraints and Controllers. Parts Based on Erin Catto s Box2D Tutorial

Lecture VI: Constraints and Controllers. Parts Based on Erin Catto s Box2D Tutorial Lecture VI: Constraints and Controllers Parts Based on Erin Catto s Box2D Tutorial Motion Constraints In practice, no rigid body is free to move around on its own. Movement is constrained: wheels on a

More information

Linear Methods for Regression and Shrinkage Methods

Linear Methods for Regression and Shrinkage Methods Linear Methods for Regression and Shrinkage Methods Reference: The Elements of Statistical Learning, by T. Hastie, R. Tibshirani, J. Friedman, Springer 1 Linear Regression Models Least Squares Input vectors

More information

SHAPE, SPACE & MEASURE

SHAPE, SPACE & MEASURE STAGE 1 Know the place value headings up to millions Recall primes to 19 Know the first 12 square numbers Know the Roman numerals I, V, X, L, C, D, M Know the % symbol Know percentage and decimal equivalents

More information

Chapter 2 Basic Structure of High-Dimensional Spaces

Chapter 2 Basic Structure of High-Dimensional Spaces Chapter 2 Basic Structure of High-Dimensional Spaces Data is naturally represented geometrically by associating each record with a point in the space spanned by the attributes. This idea, although simple,

More information

APS Sixth Grade Math District Benchmark Assessment NM Math Standards Alignment

APS Sixth Grade Math District Benchmark Assessment NM Math Standards Alignment SIXTH GRADE NM STANDARDS Strand: NUMBER AND OPERATIONS Standard: Students will understand numerical concepts and mathematical operations. 5-8 Benchmark N.: Understand numbers, ways of representing numbers,

More information

Chapter 3 Analysis of Original Steel Post

Chapter 3 Analysis of Original Steel Post Chapter 3. Analysis of original steel post 35 Chapter 3 Analysis of Original Steel Post This type of post is a real functioning structure. It is in service throughout the rail network of Spain as part

More information

Ultrasonic Multi-Skip Tomography for Pipe Inspection

Ultrasonic Multi-Skip Tomography for Pipe Inspection 18 th World Conference on Non destructive Testing, 16-2 April 212, Durban, South Africa Ultrasonic Multi-Skip Tomography for Pipe Inspection Arno VOLKER 1, Rik VOS 1 Alan HUNTER 1 1 TNO, Stieltjesweg 1,

More information

Note Set 4: Finite Mixture Models and the EM Algorithm

Note Set 4: Finite Mixture Models and the EM Algorithm Note Set 4: Finite Mixture Models and the EM Algorithm Padhraic Smyth, Department of Computer Science University of California, Irvine Finite Mixture Models A finite mixture model with K components, for

More information

An Introduction to Structural Optimization

An Introduction to Structural Optimization An Introduction to Structural Optimization SOLID MECHANICS AND ITS APPLICATIONS Volume 153 Series Editor: G.M.L. GLADWELL Department of Civil Engineering University of Waterloo Waterloo, Ontario, Canada

More information

Kernel Methods & Support Vector Machines

Kernel Methods & Support Vector Machines & Support Vector Machines & Support Vector Machines Arvind Visvanathan CSCE 970 Pattern Recognition 1 & Support Vector Machines Question? Draw a single line to separate two classes? 2 & Support Vector

More information

Fitting Fragility Functions to Structural Analysis Data Using Maximum Likelihood Estimation

Fitting Fragility Functions to Structural Analysis Data Using Maximum Likelihood Estimation Fitting Fragility Functions to Structural Analysis Data Using Maximum Likelihood Estimation 1. Introduction This appendix describes a statistical procedure for fitting fragility functions to structural

More information

PITSCO Math Individualized Prescriptive Lessons (IPLs)

PITSCO Math Individualized Prescriptive Lessons (IPLs) Orientation Integers 10-10 Orientation I 20-10 Speaking Math Define common math vocabulary. Explore the four basic operations and their solutions. Form equations and expressions. 20-20 Place Value Define

More information

Transactions on the Built Environment vol 28, 1997 WIT Press, ISSN

Transactions on the Built Environment vol 28, 1997 WIT Press,   ISSN Shape/size optimization of truss structures using non-probabilistic description of uncertainty Eugenio Barbieri, Carlo Cinquini & Marco Lombard! LWveraz'ry of fawa, DeparfmcMf q/#r%cf%ra7 Mzc/zamcj, fawa,

More information

Expectation and Maximization Algorithm for Estimating Parameters of a Simple Partial Erasure Model

Expectation and Maximization Algorithm for Estimating Parameters of a Simple Partial Erasure Model 608 IEEE TRANSACTIONS ON MAGNETICS, VOL. 39, NO. 1, JANUARY 2003 Expectation and Maximization Algorithm for Estimating Parameters of a Simple Partial Erasure Model Tsai-Sheng Kao and Mu-Huo Cheng Abstract

More information

DRAFT: Analysis of Skew Tensegrity Prisms

DRAFT: Analysis of Skew Tensegrity Prisms DRAFT: Analysis of Skew Tensegrity Prisms Mark Schenk March 6, 2006 Abstract This paper describes the properties of skew tensegrity prisms. By showing that the analytical equilibrium solutions of regular

More information

Chapter 3 Path Optimization

Chapter 3 Path Optimization Chapter 3 Path Optimization Background information on optimization is discussed in this chapter, along with the inequality constraints that are used for the problem. Additionally, the MATLAB program for

More information

CITY AND GUILDS 9210 UNIT 135 MECHANICS OF SOLIDS Level 6 TUTORIAL 15 - FINITE ELEMENT ANALYSIS - PART 1

CITY AND GUILDS 9210 UNIT 135 MECHANICS OF SOLIDS Level 6 TUTORIAL 15 - FINITE ELEMENT ANALYSIS - PART 1 Outcome 1 The learner can: CITY AND GUILDS 9210 UNIT 135 MECHANICS OF SOLIDS Level 6 TUTORIAL 15 - FINITE ELEMENT ANALYSIS - PART 1 Calculate stresses, strain and deflections in a range of components under

More information

Multi-Mesh CFD. Chris Roy Chip Jackson (1 st year PhD student) Aerospace and Ocean Engineering Department Virginia Tech

Multi-Mesh CFD. Chris Roy Chip Jackson (1 st year PhD student) Aerospace and Ocean Engineering Department Virginia Tech Multi-Mesh CFD Chris Roy Chip Jackson (1 st year PhD student) Aerospace and Ocean Engineering Department Virginia Tech cjroy@vt.edu May 21, 2014 CCAS Program Review, Columbus, OH 1 Motivation Automated

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

1314. Estimation of mode shapes expanded from incomplete measurements

1314. Estimation of mode shapes expanded from incomplete measurements 34. Estimation of mode shapes expanded from incomplete measurements Sang-Kyu Rim, Hee-Chang Eun, Eun-Taik Lee 3 Department of Architectural Engineering, Kangwon National University, Samcheok, Korea Corresponding

More information

OPTIMIZATION OF STIFFENED LAMINATED COMPOSITE CYLINDRICAL PANELS IN THE BUCKLING AND POSTBUCKLING ANALYSIS.

OPTIMIZATION OF STIFFENED LAMINATED COMPOSITE CYLINDRICAL PANELS IN THE BUCKLING AND POSTBUCKLING ANALYSIS. OPTIMIZATION OF STIFFENED LAMINATED COMPOSITE CYLINDRICAL PANELS IN THE BUCKLING AND POSTBUCKLING ANALYSIS. A. Korjakin, A.Ivahskov, A. Kovalev Stiffened plates and curved panels are widely used as primary

More information