Optimization Methods for Calculating Design Imprecision†

William S. Law    Erik K. Antonsson
Engineering Design Research Laboratory
Division of Engineering and Applied Science
California Institute of Technology
Pasadena, California

Abstract

The preliminary design process is characterized by imprecision: the vagueness of an incomplete design description. The Method of Imprecision uses the mathematics of fuzzy sets to explicitly represent and manipulate imprecise preliminary design information, enabling the designer to explore the space of alternative designs in the context of the designer and customer's preferences among alternatives. This paper introduces new methods to perform Method of Imprecision calculations for general non-monotonic design evaluation functions that address the practical necessity to minimize the number of function evaluations. These methods utilize optimization and experiment design.

Introduction

Evaluation is a key component of preliminary design. Evaluating alternatives early in the design process avoids further investment in inferior alternatives, and can allow more alternatives to be considered. Traditional evaluation tools are of limited value in preliminary design because they require a precise design description. Preliminary design information is characteristically imprecise: the design description is vague and indistinct. The designer must consider a cloud of alternative designs. A computational tool that considers individual designs separately provides limited insight into the structure of the cloud of alternatives. A more systematic methodology is required: one that explicitly models imprecision. The Method of Imprecision [1, 2, 3] uses the mathematics of fuzzy sets to represent and manipulate imprecise preliminary design information. The Imprecise Design Tool [3], a computational implementation of this methodology, supports preliminary design decisions based on imprecise information, and accepts any design evaluation function as required by the application.
† Manuscript prepared for submission to the 21st ASME Design Automation Conference, 17-21 September 1995.
KEYWORDS: Design optimization; industrial examples, developments and perspectives.
Practical computational tools for design must be resource efficient. They must not demand too much of a designer's time and attention, and they must produce satisfactory answers without excessive computation. Frequently the cost of evaluating a design is substantial, and the critical measure of computational efficiency is the number of design evaluations required. This paper presents new methods to calculate imprecision for general non-monotonic evaluation functions that address the practical necessity to minimize the number of function evaluations. Many of these methods evolved from discussions with engineers from the Vehicle Structures Computer Aided Design group at the Ford Motor Company, Dearborn, MI, and in particular Thomas Mathai. The authors gratefully acknowledge their contribution.

Definitions and Notation

Design alternatives are described by a collection of design variables d_1, ..., d_n. These variables need not be continuous or even ordinal: the design variable "styling" may have the unordered values "conservative", "sporty", and "futuristic". The set of valid values for d_i is denoted X_i. The whole set of design variables forms an n-vector, ~d, that uniquely identifies each design alternative in the design variable space (DVS). Performance variables p_1, ..., p_q measure aspects of a design's performance. Each performance variable p_j is defined by a mapping f_j such that p_j = f_j(~d). The mappings f_j can be any calculation or procedure to evaluate the performance of a design, including closed-form equations, computational algorithms, "black box" functions, prototype testing, and market research. The set of valid values for p_j is denoted Y_j. The set of performance variables for each design alternative forms a q-vector, ~p = ~f(~d), that specifies the performance of a design. The performance variable space (PVS) encompasses all performances ~p. Imprecise variables may potentially assume any value within a possible range because the designer does not know, a priori, the final value that will emerge from the design process.
Yet even though the designer is unsure about what value to specify, certain values will be preferred over others. This preference, which may arise objectively (e.g., cost or availability of components) or subjectively (e.g., from experience), is used to quantify the imprecision associated with a design variable. The preference that the designer has for values of the design variable d_i is represented by a preference function on X_i, termed the design preference:

    μ_{d_i}(d_i): X_i → [0, 1] ⊂ R

μ_{d_i}(d_i) quantifies the designer's preference for values of d_i, and is distinct from the customary membership function in a fuzzy set, which quantifies the extent to which values belong to the set. The preference that a customer has for values of the performance variable p_j is similarly represented by a preference function on Y_j, termed the functional requirement:

    μ_{p_j}(p_j): Y_j → [0, 1] ⊂ R

The combined preference of the designer and customer for a particular design ~d is represented by an overall preference μ_o(~d), which is a function of the design preferences μ_{d_i}(d_i) and the functional requirements μ_{p_j}(p_j) = μ_{p_j}(f_j(~d)):

    μ_o = P(μ_{d_1}, ..., μ_{d_n}, μ_{p_1}, ..., μ_{p_q})

The combination function P reflects the design or trade-off strategy, which indicates how competing attributes of the design should be traded off against each other [1].
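As a small illustration of these definitions, the following sketch computes an overall preference for a one-variable, one-performance design problem. The triangular preference functions, the performance mapping, and the choice of min() as the combination function P are illustrative assumptions, not quantities prescribed by the paper (min is one common non-compensating trade-off strategy discussed in [1]).

```python
# Sketch of an overall-preference calculation for a single design
# variable d_1 and a single performance variable p_1 = f_1(d_1).
# All specific functions here are illustrative assumptions.

def triangular(lo, peak, hi):
    """Return a triangular preference function on [lo, hi] peaking at `peak`."""
    def mu(x):
        if x <= lo or x >= hi:
            return 0.0
        if x <= peak:
            return (x - lo) / (peak - lo)
        return (hi - x) / (hi - peak)
    return mu

mu_d = triangular(1.0, 2.0, 4.0)      # design preference on X_1 (designer)
mu_p = triangular(10.0, 20.0, 25.0)   # functional requirement on Y_1 (customer)
f = lambda d: 5.0 * d                 # performance mapping p_1 = f_1(d_1)

def overall_preference(d):
    # Combination function P taken here as min: a non-compensating
    # trade-off strategy (an assumption for this example).
    return min(mu_d(d), mu_p(f(d)))

print(overall_preference(3.0))        # overall preference for the design d_1 = 3.0
```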
Figure 1. The Level Interval Algorithm.

Imprecision Calculations

After specifying design preferences μ_{d_1}, ..., μ_{d_n} and functional requirements μ_{p_1}, ..., μ_{p_q}, and identifying the appropriate design strategy, the individual μ_{d_i} are combined to obtain μ(~d), the combined design preference on the DVS. μ(~d) is then induced onto the PVS, using the extension principle [4]:

    μ(~p) = sup{ μ(~d) | ~p = ~f(~d) }

where sup over the null set is defined to be zero. μ(~d) is the combined design preference on the DVS, as distinct from μ(~p), the combined design preference induced onto the PVS. μ(~p) is obtained by mapping μ(~d) onto the PVS. Previously, μ(~p) has been calculated using the Level Interval Algorithm, or LIA [2], first proposed by Dong and Wong [5] as the "Fuzzy Weighted Average" algorithm and also called the "Vertex Method". The LIA uses M design preference α-cuts D_{α1}, ..., D_{αM} in the DVS to calculate individual induced α-cut intervals [p_{jα min}, p_{jα max}] which together define the induced α-cuts P_α in the PVS:

    D_α = { ~d ∈ DVS | μ(~d) ≥ α } = [d_{1α min}, d_{1α max}] × ... × [d_{nα min}, d_{nα max}]
    P_α = { ~p ∈ PVS | μ(~p) ≥ α } = [p_{1α min}, p_{1α max}] × ... × [p_{qα min}, p_{qα max}]

where α = α_1, ..., α_M. For each α, the LIA evaluates p_j = f_j(~d) for the 2^n permutations of α-cut end points which correspond to the corners of an n-cube defined by D_α (there are n design variables and M α-cuts). Figure 1 illustrates how α-cuts D_α in two design variables d_1 and d_2 are induced onto the interval [p_{jα min}, p_{jα max}]. f_j is evaluated at the 2^n = 4 corner points of each D_α rectangle. It is assumed that p_{jα min} and p_{jα max} will occur at these corner points, and not inside D_α. This is not true in general: the mapping f_j: DVS → Y_j and the combination function P must satisfy certain conditions for the LIA to be exact [6]. In practice, these conditions require that f_j be monotonic: a severe restriction.
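The corner-point assumption, and where it fails, can be seen in a minimal sketch of the LIA for one performance variable. The test functions are illustrative assumptions; the second one is deliberately non-monotonic to show the interval coming out wrong.

```python
# Sketch of the Level Interval Algorithm (Vertex Method) for one
# performance variable: evaluate f at all 2^n corners of the alpha-cut
# box and take the min and max.  Exact only when f is monotonic.
from itertools import product

def lia_interval(f, alpha_cut):
    """alpha_cut: list of (lo, hi) intervals, one per design variable.
    Returns the induced interval (p_min, p_max) from the 2^n corners."""
    values = [f(corner) for corner in product(*alpha_cut)]
    return min(values), max(values)

# Monotonic example: f increases in both variables, so the extremes
# really do lie at corners and the LIA interval is exact.
f = lambda d: d[0] + 2.0 * d[1]
print(lia_interval(f, [(1.0, 2.0), (0.0, 3.0)]))   # -> (1.0, 8.0)

# Non-monotonic counterexample: the true minimum of (d-1)^2 over [0, 2]
# is 0.0 at the interior point d = 1, but the LIA sees only the corners.
g = lambda d: (d[0] - 1.0) ** 2
print(lia_interval(g, [(0.0, 2.0)]))               # -> (1.0, 1.0), missing 0.0
```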
    μ_{d_i}(d_i)   individual design preference on X_i (designer), i = 1, ..., n
    μ(~d)          combined design preference on the DVS
    μ(~p)          combined design preference induced onto the PVS
    μ_{p_j}(p_j)   individual functional requirement on Y_j (customer), j = 1, ..., q
    μ_p(~p)        combined functional requirement on the PVS
    μ_p(~d)        combined functional requirement mapped back onto the DVS
    μ_o(~d)        overall preference on the DVS
    μ_o(~p)        overall preference on the PVS
    P              combination function
    D_α            α-cut in the DVS, within which μ(~d) ≥ α  (α = α_1, ..., α_M)
    P_α            α-cut in the PVS, within which μ(~p) ≥ α  (α = α_1, ..., α_M)

Optimization

The limitations of the LIA stem from the assumption that the extreme values of f_j will occur at the corner points of the D_α n-cube. The algorithm may thus be improved by relaxing this assumption. The problem restated is to find:

    p_{jα min} = min{ p_j = f_j(~d) | ~d ∈ D_α }
    p_{jα max} = max{ p_j = f_j(~d) | ~d ∈ D_α }

Finding extrema within a subspace is a constrained optimization problem. Optimization techniques are divided into two categories: traditional and stochastic. Traditional methods converge in relatively few function evaluations but are less robust, tending to become stuck in local minima. Stochastic methods such as genetic algorithms are more robust, but require a large number of function evaluations. Where function evaluations are relatively expensive, as is common in design, traditional optimization methods are preferred. The algorithm utilized here is Powell's method, which begins as a one-at-a-time search. After each iteration a heuristic determines whether to replace the direction of maximum decrease with the net direction moved during the last iteration. This allows minimization down valleys while avoiding linear dependence in the set of search directions [7]. An important feature for a practical computational tool is a means to trade off the number of function evaluations against accuracy. Such an adjustment enables the designer to use the same program to obtain quick estimates as well as precise evaluations.
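To make the cheapest version of this search concrete, here is a sketch of a single-pass one-at-a-time corner search for one end point of the induced interval. This is an illustration only: the tool itself uses Powell's method [7], and this sketch corresponds to its first, coordinate-by-coordinate pass restricted to corner points.

```python
# Sketch of a single-pass one-at-a-time corner search for the minimum of
# f over an alpha-cut box.  Starts at one corner and checks the opposite
# corner in each of the n coordinate directions, keeping any move that
# decreases f: n + 1 evaluations for one end point (2n + 2 per alpha-cut
# for both ends, versus 2^n for the LIA).
def corner_search_min(f, alpha_cut):
    """alpha_cut: list of (lo, hi) intervals, one per design variable."""
    current = [lo for lo, hi in alpha_cut]     # start at the "lo" corner
    best = f(tuple(current))
    for i, (lo, hi) in enumerate(alpha_cut):
        trial = list(current)
        trial[i] = hi if current[i] == lo else lo
        y = f(tuple(trial))
        if y < best:                           # move to the minimum each time
            best, current = y, trial
    return best, tuple(current)

# Monotonic example: min of d0 + 2*d1 over [1,2] x [0,3] is at (1, 0),
# found in n + 1 = 3 evaluations.
f = lambda d: d[0] + 2.0 * d[1]
print(corner_search_min(f, [(1.0, 2.0), (0.0, 3.0)]))  # -> (1.0, (1.0, 0.0))
```

As with the LIA, the resulting interval is only guaranteed correct when f is monotonic over the α-cut.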
This is implemented as a user-specified fractional precision that defines termination criteria for the optimization algorithm. Suppose that it is necessary to incur the minimum number of function evaluations. A fractional precision of 1 would be specified, creating automatically satisfied termination criteria, and the optimization would proceed through exactly one iteration of a one-at-a-time search using the maximum step size. The algorithm begins at one corner of the search space D_α, and checks corners in each of the n directions given by d_1, ..., d_n, moving to the minimum each time. It expends n + 1 function evaluations to find each end point, and therefore 2n + 2 per α-cut, as compared to 2^n per α-cut for the LIA. This is a substantial improvement, but the α-cut interval obtained by this method is still only correct if f_j is monotonic. If, however, f_j is known to be monotonic, 2n + 2 is not the minimum number of function evaluations. The first pass of the optimization algorithm identifies the direction for each d_i in which f_j increases. Subsequent extrema can then be directly evaluated, without the need for searching. Hence where f_j is monotonic, n + 2 function evaluations are required for the first α-cut and 2 for each subsequent α-cut.

Design of Experiments

Statistical design of experiments seeks to derive information about a process using as few observations as possible. It has two aims: to separate the effects to be measured from random noise, and to model the process with regression equations. The function f_j can be treated as an unknown process, which a shrewd optimization method should examine with a few function evaluations before searching for extrema. Statistical design of experiments is an efficient method to conduct this preliminary examination. Note that if the process is deterministic, e.g., a computer program, repeated function evaluations will always give the same answer: the output contains no noise. Therefore statistical significance tests to distinguish the signal are unnecessary. This paper discusses only the use of experiment design to model the function, though statistical significance tests are a valuable technique for processes subject to noise. Because the procedure is to be encoded in a computer program, advanced experiment design techniques cannot be implemented. Hence only a linear regression model will be fitted to the function. Such a simple model will adequately approximate the effect of few, if any, design variables, but that is sufficient. The purpose is not to replace the entire function with a sophisticated regression model, but rather to identify design variables with near-linear effects. For each of these variables, the function can then be approximated by a linear equation, shrinking the search space by one dimension. The function should therefore be modeled over the search space D_α. But which D_α? α-cuts with higher α are subsets of α-cuts with lower α. Thus D_{α1}, the α-cut with the lowest α, contains all the other α-cuts. Any regression equations found to be acceptable over D_{α1} will be acceptable over all α-cuts.
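The modeling step described above can be sketched for a small case. The specific 2^(4-1) resolution IV design (generated by D = ABC) and the test function below are illustrative assumptions; the point is that a balanced half-fraction of the corners suffices to estimate all main effects, with each estimate using the entire data set.

```python
# Sketch of a 2-level fractional factorial experiment used to fit a
# linear model of f over the lowest alpha-cut, in coded units (-1/+1).
# The 2^(4-1) resolution IV design (generator D = A*B*C) and the test
# function are illustrative assumptions.
import itertools

def fractional_factorial_4_1():
    """8 runs for 4 variables: a half fraction with D = A*B*C."""
    return [(a, b, c, a * b * c)
            for a, b, c in itertools.product((-1, 1), repeat=3)]

def main_effects(f, runs):
    """Estimate each main effect as the average response at the +1 level
    minus the average at the -1 level (every run contributes to every
    estimate, unlike a two-point difference)."""
    ys = [f(r) for r in runs]
    effects = []
    for i in range(len(runs[0])):
        hi = [y for r, y in zip(runs, ys) if r[i] == 1]
        lo = [y for r, y in zip(runs, ys) if r[i] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Linear test function: the estimated effects recover twice the slopes.
f = lambda d: 3.0 * d[0] - 2.0 * d[1] + 0.5 * d[2] + 0.0 * d[3]
runs = fractional_factorial_4_1()
print(main_effects(f, runs))   # -> [6.0, -4.0, 1.0, 0.0]

# Curvature check: if f at the center point departs from the grand mean,
# a linear model is suspect.
center_gap = f((0, 0, 0, 0)) - sum(f(r) for r in runs) / len(runs)
print(center_gap)              # zero here, since f is exactly linear
```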
Hence only one set of experiments over D_{α1} is required to explore the entire search space. The Imprecise Design Tool will use a 2-level experiment design with a center point to serve as a curvature check. A full factorial design would evaluate the same 2^n corner points of D_{α1} as the LIA, but since there are n main effects and 1 average to be determined, only n + 1 evaluations are strictly necessary (excluding the center point). A fractional factorial design, which only evaluates a balanced subset of corner points, is more efficient. In reducing a full factorial design down to a fractional factorial design, some interactions are unavoidably confounded with other interactions, so that their effects cannot be distinguished. It is assumed that main effects, due to a single variable, are more likely than two-way interactions, which are in turn more likely than three-way and higher order interactions. Since only main effects will be estimated, they must not be confounded with each other. This requires a resolution III (or higher) design. But it is desirable for main effects also not to be confounded with two-way interactions, and this requires a resolution IV design [8]. For n = 8, the smallest resolution IV design is a 2^(8-4) fractional factorial design requiring 16 observations. A resolution III design would require 12 observations. Resolution III designs approach the strictly necessary n + 1 function evaluations. Resolution IV designs require between 2n and 4n - 4 function evaluations [9]. For n = 8, 16 function evaluations would be used: 9 evaluations are strictly necessary to estimate the 8 main effects and 1 average, and so there are 7 "redundant" evaluations. But these evaluations are not wasted: they allow main effects to be separated from two-way interactions, and they provide 7 extra points to verify the accuracy of the linear regression model. The number of function evaluations can also be traded off against accuracy for the design of experiments. The linear regression equations obtained replace the function where the approximation is acceptable. The criteria for "acceptable", which determine how accurately the function is modeled, are directly related to the user-specified fractional precision used by the optimization algorithm. Thus a single parameter trades off computational effort against accuracy for both optimization and experiment design. Relaxing the criteria for model acceptability minimizes the number of function evaluations. Yet there are essential conditions that still must be satisfied: if the sign of the gradient for a design variable d_i is in doubt or if the center point is an extremum, the linear regression equation for d_i must be rejected. If the function is benign, a maximum of 4n - 3 evaluations (including the center point) will be incurred to obtain the regression equations, and up to 2 evaluations will be required for the predicted α-cut end points. 4n - 1 evaluations exceeds the n + 1 evaluations required for a one-at-a-time search, but the advantages are fourfold:

1. Monotonicity is not assumed: up to 3n - 5 "redundant" points test for monotonicity and linearity.
2. The center point tests for curvature.
3. The entire data set is used in estimating each effect, instead of two points.
4. An even distribution of corner points is sampled, instead of n + 1 adjacent corners.

Discussion

Figure 2 shows the role of optimization and experiment design in the Imprecise Design Tool. The information flow for one performance variable p_j = f_j(~d) is shown. α-cut intervals [p_{jα min}, p_{jα max}] can be calculated separately for each Y_j, and then combined in the uppermost preference calculation module. All Method of Imprecision calculations that explicitly involve preference occur at this level.
Mapping α-cuts from the DVS to Y_j requires optimization, which searches for the induced α-cut interval [p_{jα min}, p_{jα max}] that corresponds to D_α. To do so, the experiment design module is called repeatedly to evaluate the function f_j at different designs ~d. The first time the module is called, it conducts a fractional factorial experiment over D_{α1}, the α-cut with lowest α, and constructs linear regression equations. For subsequent calls, regression equations replace the function f_j for any design variables that are adequately approximated. The fractional precision used by the optimization and experiment design modules trades off the number of function evaluations against accuracy. For the two design alternatives in the turbofan engine design example in [3], the Imprecise Design Tool using the LIA required 12 and 128 function evaluations. The alternatives had 4 and 5 α-cuts with some coincident end points, especially for the first alternative. Although there were nominally 8 design variables for both alternatives, the number of dimensions in the search space was 3 and 6. Using only optimization, and without taking advantage of monotonicity, the current version of the Imprecise Design Tool required 12 and 38 function evaluations to obtain the same results. As expected, optimization has a greater advantage for larger n.
Figure 2. The Imprecise Design Tool.

Conclusion

This paper has presented new methods to perform Method of Imprecision calculations. These methods, which use optimization and experiment design techniques, provide two important enhancements:

1. Evaluation functions are no longer assumed to be monotonic.
2. The number of function evaluations required is substantially reduced.

Preliminary design necessarily deals with imprecise design descriptions. Traditional evaluation tools lack a systematic methodology to accommodate imprecision and deliver only disconnected information on individual designs. The Method of Imprecision uses the mathematics of fuzzy sets to represent and manipulate imprecise preliminary design information, enabling the designer to explore and understand the space of alternative designs in the context of the designer and customer's preferences among alternatives. This paper has presented continuing work in implementing this methodology in a practical computational design tool.
Acknowledgments

This material is based upon work supported, in part, by the National Science Foundation under NSF Grant No. DDM-9201424. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsors.

References

[1] Kevin N. Otto and Erik K. Antonsson. Trade-Off Strategies in Engineering Design. Research in Engineering Design, 3(2):87-104, 1991.
[2] Kristin L. Wood, Kevin N. Otto, and Erik K. Antonsson. Engineering Design Calculations with Fuzzy Parameters. Fuzzy Sets and Systems, 52(1):1-20, November 1992.
[3] William S. Law and Erik K. Antonsson. Including Imprecision in Engineering Design Calculations. In Design Theory and Methodology - DTM '94, volume DE-68, pages 109-114. ASME, September 1994.
[4] L. A. Zadeh. Fuzzy sets. Information and Control, 8:338-353, 1965.
[5] W. M. Dong and F. S. Wong. Fuzzy weighted averages and implementation of the extension principle. Fuzzy Sets and Systems, 21(2):183-199, February 1987.
[6] Kevin N. Otto, Andrew D. Lewis, and Erik K. Antonsson. Approximating α-cuts with the Vertex Method. Fuzzy Sets and Systems, 55(1):43-50, April 1993.
[7] P. Adby and M. Dempster. Introduction to Optimization Methods. Chapman and Hall, London, 1974.
[8] T. Barker. Quality by Experimental Design. Marcel Dekker, Inc., New York, 1985.
[9] M. Phadke. Quality Engineering Using Robust Design. Prentice Hall, Englewood Cliffs, NJ, 1989.