ROBUST DESIGN. Seminar Report. Doctor of Philosophy. (Aerospace Engineering) SHYAM MOHAN. N. (Roll No ) Under the guidance of


ROBUST DESIGN

Seminar Report

Submitted towards partial fulfillment of the requirement for the award of the degree of
Doctor of Philosophy (Aerospace Engineering)

By
SHYAM MOHAN. N (Roll No )

Under the guidance of
Prof. K. Sudhakar
Prof. P. M. Mujumdar

Department of Aerospace Engineering, Indian Institute of Technology, Bombay
November 2002

ABSTRACT

The underlying principles, techniques & methodology of robust design are discussed in detail in this report, with a case study presented to illustrate the effectiveness of robust design. The importance of parameter design & tolerance design as the major elements of quality engineering is described. The quadratic loss functions for different quality characteristics are presented, highlighting the fraction defective fallacy. The aim of the robust design technique is to minimize the variance of the response, and orthogonal arrays are an effective simulation aid for evaluating the relative effects of variation in different parameters on the response with the minimum number of experiments. Statistical techniques like ANOM (analysis of means) and ANOVA (analysis of variance) are the tools for analyzing the data obtained from the orthogonal array based experiments. Using this technique of robust design, the quality of a product or process can be improved by minimizing the effect of the causes of variation without eliminating the causes. Fundamental ways of improving the reliability of a product are discussed, highlighting the importance of robust design in this. Based on a classification of uncertainties in design, the roles of robust design optimization & reliability based design optimization are discussed, and the mathematical formulations for these optimization strategies are explained. Based on this study, it can be concluded that the robust design methodology based on Taguchi's principles addresses the full range of noise factors which can cause underperformance and failures, but it is advantageous to perform a combined robust & reliability based design optimization because, apart from making the design insensitive to noises, it enables the designer to predict the reliability of the product.

Current research activities in the application of robust design techniques to aerospace systems are also discussed: one concerns relaxing manufacturing tolerances on an aircraft nacelle to reduce cost, and the other tackles uncertainty in Mach number in the design optimization of an airfoil for a transport aircraft.

Table of Contents

1. Introduction
   1.1 Historical perspective
2. Quality Engineering using Robust Design
   2.1 Quality Engineering Principles
   2.2 Quality Loss Function & The Fraction Defective Fallacy
       2.2.1 Different types of Quality Loss Function
   2.3 Response Variations & Control
3. Robust Design Technique
   3.1 Classification of Parameters
   3.2 Average Quality Loss due to Noise Factors
   3.3 Exploiting Non-linearity for robust design
   3.4 Tasks to be performed in Robust Design
4. Matrix Experiments using Orthogonal Arrays
   4.1 Steps in Robust Design
   4.2 Identification of control & noise factors
   4.3 Selection of factor levels
   4.4 Factor assignment
5. A Case Study
   Methods of Simulating the variation in noise factors
6. Reliability Improvement
   Role of S/N Ratios in Reliability improvement
7. Design Optimization under Uncertainty
   Robust Design Optimization (RDO) and Reliability Based Design Optimization (RBDO)
8. Application of robust design in aerospace systems
9. Conclusion
References

List of Figures

Fig-1  General trend of quality definition
Fig-2  Fraction defective fallacy
Fig-3  Quality loss as a step function & quadratic function
Fig-4  Quadratic quality loss
Fig-5  Quality loss function for nominal the best type
Fig-6  Quality loss function for smaller the better type
Fig-7  Quality loss function for larger the better type
Fig-8  Nature of variations & control
Fig-9  Design block diagram
Fig-10 Distribution of quality characteristic
Fig-11 Mean shift due to noise effects
Fig-12 Exploiting non-linear relation
Fig-13 Maximum S/N ratio & the robust point
Fig-14 Two level selection
Fig-15 Three level selection
Fig-16 Schematic diagram of the reduced pressure reactor
Fig-17 Plots of S/N ratio vs parameter levels
Fig-18 Uncertainty classification
Fig-19 Robust design principle
Fig-20 Nacelle's eleven key features
Fig-21 Surface excrescence at the key manufacturing features

List of Tables

Table-1  L4 orthogonal array with 2 levels
Table-2  L8 orthogonal array with 2 levels
Table-3  L9 orthogonal array with noise capturing
Table-4  Factor level assignment
Table-5  Control factors & their levels
Table-6  L18 orthogonal array & factor assignment
Table-7  Experimenter's log
Table-8  Data on surface defects count
Table-9  Thickness & deposition rate data
Table-10 S/N ratios from matrix experiments
Table-11 Analysis of surface defect data
Table-12 Analysis of thickness data
Table-13 Analysis of deposition rate data
Table-14 Summary of factor effects
Table-15 Tolerance synthesis

Acronyms

ANOM - Analysis of Means
ANOVA - Analysis of Variance
NC - Number of Constraints
NPR - Number of performances to be robust
OBJ - Objective function
OA - Orthogonal Array
Q - Quality
QC - Quality Control
R&D - Research & Development
S/N - Signal to Noise

Symbols

Δ - Tolerance
L(y) - Quality loss function
m - Specification target
Ao - Cost of replacement or repair
µ - Mean
σ - Standard deviation
σ² - Variance
Σ - Summation
d - Design variable
x - Random variable
R - Response
Gi - i-th constraint function

Acknowledgement

The author would like to express his deep and sincere gratitude to Prof. K. Sudhakar and Prof. P. M. Mujumdar of the Aerospace Engineering Department for their continuous guidance and support in this seminar work and in the preparation of this report.

1. Introduction

The knowledge of scientific phenomena and past experience with similar product designs and manufacturing processes form the basis of the engineering design activity. However, a number of new decisions related to the particular product must be made regarding product architecture, parameters of the product design, the process architecture and parameters of the manufacturing process. A large amount of engineering effort is consumed in conducting experiments (either with hardware or by simulation) to generate the information needed to guide these decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low and producing high-quality products. Robust Design is an engineering methodology for improving productivity during design & development so that high quality products can be produced at low cost. Designing high quality products and processes at low cost is an economic and technological challenge to the engineer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality & cost, called Robust Design, which is capable of:

1. Making product performance insensitive to raw material variation, thus allowing the use of lower grade alloys & components in most cases,
2. Making designs robust against manufacturing variation, thus reducing labor & material cost for rework & scrap,
3. Making the design least sensitive to variation in the operating environment, thus improving reliability and reducing operating cost, and
4. Using a new structured development process so that engineering time is used more productively.

The Robust Design method uses a mathematical tool called Orthogonal Arrays to study a large number of decision variables with a small number of experiments. It also uses a new measure of quality, called the signal-to-noise (S/N) ratio, to predict quality from the customer's perspective.
[1,2,3] Thus, the most economical product & process design, from both the manufacturing & the customer's viewpoint, can be accomplished at the smallest, affordable development cost.

Robust design yields a robust product that works as intended regardless of variation in the product's manufacturing process, variation resulting from deterioration, variation in operating conditions and variation in the ambient conditions during use. Robustness can be achieved when the designer understands these potential sources of variation & takes steps to desensitize the product to them. This requires intelligent design: understanding which product/process design parameters are critical to the achievement of a performance characteristic, and what their optimum values are, both to achieve the performance characteristic & to minimize its variation. Robust Design is based on the principle of optimization in which the objective function is defined as the signal to noise ratio, which helps in finding those values of the design parameters at which the response is least sensitive to the different effects of the noise factors. [1,2,3] So the fundamental principle of Robust Design is to improve the quality of a product by minimizing the effects of the causes of variation without eliminating the causes. This is achieved by optimizing the product & process designs to make the performance minimally sensitive to the various causes of variation.

1.1 Historical perspective

When Japan began its reconstruction efforts after World War II, it faced an acute shortage of good quality raw materials, high quality manufacturing equipment and skilled engineers. The challenge was to produce high quality products and continue to improve quality under those circumstances. The task of developing a methodology to meet the challenge was assigned to Dr. Genichi Taguchi, who at that time was a manager in the Nippon Telephone & Telegraph Company. Through his research in the 1950s and early 1960s, Dr. Taguchi developed the foundations of Robust Design and validated its basic philosophies by applying them in the development of many products.
In recognition of this contribution, he received the Individual Deming Award in 1962, one of the highest recognitions in the quality field. [3] The Robust Design method can be applied to a wide variety of problems. The application of the method in electronics, automotive products, photography and many other industries has been an important factor in the rapid industrial growth and the subsequent domination of international markets in these industries by Japan.

2. Quality Engineering using Robust Design

2.1 Quality Engineering Principles

Though quality is commonly defined as "conformance to specification", "fitness for use", etc., these definitions do not cover the entire implied meaning of quality. The ideal quality a customer can expect is that the product delivers the target performance each time it is used, under all intended operating conditions and throughout its intended life, with no harmful side effects. Dr. Taguchi brought out the fallacy in the fraction defective definition of quality, in which the number of defectives, based on the principle depicted in Fig-1, was the only concern. As per his theory [2], the measure of quality of a product is the total loss to society due to functional variation and harmful side effects. Under ideal quality this loss is zero; the greater the loss, the lower the quality. On this view, the total cost of a product is the sum of the operating cost (including maintenance & inventory), the manufacturing cost, the R&D cost (time, laboratory charges, resources, etc.) and the cost incurred by its breakdown and thereby the losses caused to society. The product life cycle cost is divided into the cost incurred before sale to the customer and after sale to the customer. Quality engineering is concerned with reducing both of these costs and is thus an interdisciplinary science involving engineering design, manufacturing operations and economics.

2.2 Quality Loss Function & The Fraction Defective Fallacy

As per this definition, the quality loss function is the total loss incurred by society due to the failure of the product to deliver the target performance and due to harmful side effects of the product, including its operating cost. According to the primitive concept of quality, a product was certified as good quality if the measured characteristics were within the specification, & vice versa. This is shown in Fig-1.
This means that all products that meet the specifications are equally good. But in reality this is not so. The product whose response is exactly on target gives the best performance. As the product's performance deviates from the target, the quality becomes progressively worse. These two quality philosophies are contrasted in Fig-2:

in one case the focus is on meeting the target, and in the other the focus is on meeting the tolerance. This is based on an actual case study [3] of the Sony TV plants of the USA & Japan, which demonstrates how the Japan-made TVs were branded as high quality products by following the principle of focusing on the target rather than on the tolerance.

Fig-1 General Trend of Quality Definition (a step function: reject below Target − Δ, accept between Target − Δ and Target + Δ, reject above Target + Δ)

Fig-2 Fraction Defective Fallacy (Sony USA: focus was meeting the tolerance, with about 0.3% of sets outside the tolerance limit; Sony Japan: focus was meeting the target. Sony Japan produced many more grade A sets & many fewer grade C sets than Sony USA, so the average grade of its sets was better, hence the customers' preference.)

From this it can be realized that the true quality measure should be based not on the step function of Fig-1 but on a quadratic loss function, as shown in Fig-3. Here the quality loss function L(y) is symmetric about the target performance. As the performance deviates from the target, the quality loss correspondingly increases. Ao is the cost of replacement or repair, incurred at the acceptable limit.

Fig-3 Quality Loss as a Step Function & Quadratic Function (the step function jumps to the loss Ao at Target ± Δ, while the quadratic loss grows continuously from zero at the target)

As shown in Fig-4, with the quadratic loss function the quality loss is given by the relation L = k(y − m)², where k is a constant called the quality loss coefficient. When y = m, the loss is zero. The loss L(y) increases slowly in the neighborhood of m, but as we go further from m, the loss increases more rapidly. The average quality loss incurred by a customer who receives a product with quality characteristic y will be L(y).

Example: The cost of scrapping a part is Rs 100 when it deviates ±0.50 mm from the target nominal of 2.00 mm.

L = k(y − m)²
where L = loss associated with attribute y, m = specification target, and k = a constant depending upon the cost and the width of the specification.

100 = k(0.50)², so k = Rs 400 per mm², giving L = 400(y − 2.0)²

This represents a paradigm shift in the way companies measure the goodness of a product.

Fig-4 Quadratic Quality Loss
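The arithmetic of this example can be sketched in a few lines of Python (an illustration only; the function name and default values simply follow the worked example above):

```python
def quality_loss(y, m=2.0, k=400.0):
    """Quadratic quality loss L(y) = k * (y - m)**2, with the example's
    values: target m = 2.00 mm, quality loss coefficient k = Rs 400 per mm^2."""
    return k * (y - m) ** 2

# At the tolerance limit (a 0.50 mm deviation) the loss equals the Rs 100
# scrap cost; nearer the target the loss falls off quadratically.
print(quality_loss(2.50))            # 100.0
print(round(quality_loss(2.10), 6))  # 4.0
```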

2.2.1 Different types of Quality Loss Function

i) Nominal-the-best type: applicable where the quality characteristic y has a finite target value, usually non-zero, and the quality loss is symmetric on either side of the target. Eg: colour density of a TV set. This type is shown schematically in Fig-5.

Fig-5 Quality Loss Function for Nominal the Best type

ii) Smaller-the-better type: for quality characteristics which can never take negative values, whose ideal value is zero, and whose performance becomes progressively worse as their value increases. Eg: radiation leakage from a microwave oven, response time of a computer, pollution from an automobile.

L(y) = k y², with k = Ao/Δo²

Fig-6 Quality Loss for Smaller the better type

iii) Larger-the-better type: for quality characteristics which do not take negative values and whose worst value is zero. As their value becomes larger, the performance becomes progressively better; the ideal value is infinity, at which point the loss is zero. Eg: bond strength of an adhesive.

L(y) = k(1/y²), with k = Ao·Δo²

Fig-7 Quality Loss for Larger the better type

2.3 Response Variations & Control

A standard design process is represented schematically below: the inputs to the design process are the design variables, and when we fix the values of these variables to satisfy the given constraints, the design is said to be complete and the corresponding output is the response.

Design Variables -> Design -> Response

When a design is completed and fabricated to its specifications, the performance of the final product may still vary from the targeted value for several reasons. One type of variation can be attributed to noises related to the fabrication process. To control these variations, the concepts of inspection, screening and on-line quality control emerged. This is shown schematically below.

Design Variables -> Design (apply tolerances) -> Production (on-line QC) -> Response (variations due to causes related to fabrication)

But there can be other reasons, beyond those related to fabrication, which cause variation in the response. These are noises during operation, and the subsystem outputs can also be affected by these noises. The system performance is affected in turn by the changes in the subsystem outputs. Tight screening of the subsystem components can be applied here to improve quality, but this method is prohibitively expensive. This is shown schematically below.

Subsystems (apply tight tolerances, tight screening; noises related to fabrication) -> System (apply tolerances, on-line QC) -> Operation (noises) -> Performance variations

The different causes of variation in the design parameters and in manufacturing are termed noises. The optimum & most efficient way to solve these problems of variation is to make the design & process insensitive to the effect of the noises (the causes of variation). This is the underlying principle of Robust Design. Fig-8 shows the different types of variation & control; on-target, low variation is the most preferred case, and it is the one obtained using Robust Design.

Fig-8 Nature of Variation & Control (frequency distributions of a variable for four cases: off-target/low variation, on-target/high variation, off-target/high variation and on-target/low variation)

3. Robust Design Technique

3.1 Classification of Parameters

In the basic design process, a number of parameters can influence the quality characteristic or response of the product. These can be classified into the following three classes, shown in the block diagram of a product/process design.

Fig-9 Design Block Diagram [3] (signal factors, control factors and noise factors act on the product/process, which produces the response)

The response, for the purpose of optimization in robust design, is called the quality characteristic. The different parameters which can influence this response are described below.

i) Signal factors: parameters set by the user to express the intended value of the response of the product. Examples: the speed setting of a fan is a signal factor for specifying the amount of breeze; the steering wheel angle specifies the turning radius of a car.

ii) Noise factors: parameters which cannot be controlled by the designer, or whose settings are difficult or expensive to control in the field. The noise factors cause the response to deviate from the target specified by the signal factor and lead to quality loss. The levels of noise factors change from unit to unit, from one environment to another and from time to time; only their statistical characteristics (mean & variance) can be known or specified.

iii) Control factors: parameters that can be specified freely by the designer. The designer has to determine the best values of these parameters to achieve the least sensitivity of the response to the effect of the noise factors.

The noise factors can again be classified into three types:
(a) External: the environment, the load, human error
(b) Unit to unit variation: variation in the manufacturing process
(c) Deterioration: as time passes, the performance deteriorates (aging related)

Robust design addresses all these different types of noise factors. For a product or process with multiple functions, different noise factors can affect different quality characteristics.

3.2 Average Quality Loss due to Noise Factors

Because of the noise factors, the quality characteristic y of a product varies from unit to unit and from time to time during the usage of the product. The distribution of y resulting from all sources of noise is shown in Fig-10 as a normal distribution with mean µ and variance σ².

Fig-10 The distribution of y, with mean µ and variance σ²

Let y be a nominal-the-best type quality characteristic and m be its target value. Let y1, y2, ..., yn be n representative measurements of the quality characteristic y, taken on a few representative units throughout the design life of the product. Because of the noise effects, the average value of y will be shifted from the target value m, as shown in Fig-11.

Fig-11 Mean Shift

Average Quality Loss = k[(µ − m)² + σ²]
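The average quality loss formula can be checked numerically from a set of representative measurements. The sketch below (hypothetical data; the target and k are borrowed from the earlier scrapping example) estimates µ and σ² from the samples and applies k[(µ − m)² + σ²]:

```python
import statistics

def average_quality_loss(samples, m, k):
    """Average quality loss k * [(mu - m)**2 + sigma**2], where mu and
    sigma**2 are estimated from representative measurements of y."""
    mu = statistics.fmean(samples)
    var = statistics.pvariance(samples, mu)  # sigma^2 about the sample mean
    return k * ((mu - m) ** 2 + var)

# Hypothetical measurements of a 2.00 mm dimension over the product's life:
y = [2.02, 1.98, 2.05, 1.95, 2.10]
print(average_quality_loss(y, m=2.0, k=400.0))
```

The two terms separate the loss into a mean-shift part and a variance part, matching the decomposition discussed next in the text.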

In this expression for the average quality loss, k(µ − m)² results from the deviation of the average value of y from the target m, and kσ² results from the mean squared deviation of y around its own mean. Of these two components of quality loss, it is easier to eliminate the first one; reducing the second component requires decreasing the variance, which is more difficult. Three methods of reducing the variance are given below:
(i) Screening out bad products with tighter tolerances
(ii) Discovering the causes of malfunction & eliminating them
(iii) Applying the robust design method to make the product's performance insensitive to the noise factors

3.3 Exploiting Non-linearity for robust design

Usually a product's quality characteristic is related to the various product parameters and noise factors through a complicated non-linear function. It is possible to find many combinations of product parameter values that give the desired target value of the product's quality characteristic under nominal noise conditions. However, due to non-linearity, these different parameter combinations can give quite different variations in the quality characteristic, even when the noise factor variations are the same. The principal goal of robust design is to exploit this non-linearity to find a combination of product parameter values that gives the smallest variation in the value of the quality characteristic around the desired target value. [3,4]

Let x = (x1, x2, ..., xn)^T denote the noise factors and z = (z1, z2, ..., zq)^T denote the product parameters (called control factors) whose values can be set by the designer. Suppose the following function describes the dependence of the quality characteristic y on x and z:

y = f(x, z)

The deviation Δy of the quality characteristic from the target value, caused by the deviations Δxi of the noise factors from their respective nominal values, can be approximated as:

Δy = (∂f/∂x1) Δx1 + (∂f/∂x2) Δx2 + ... + (∂f/∂xn) Δxn

Further, if the deviations of the noise factors are uncorrelated, the variance σy² of y can be expressed in terms of the variances of the xi as follows: [2,3,5]

σy² = (∂f/∂x1)² σx1² + (∂f/∂x2)² σx2² + ... + (∂f/∂xn)² σxn²

Thus the variance of the response, σy², is the sum of the products of the variances of the noise factors, σxi², and the sensitivity coefficients, (∂f/∂xi)². The sensitivity coefficients are themselves functions of the control factor values. A robust product/process is one in which the sensitivity coefficients are smallest. The use of the non-linearity between the response and the input parameters is illustrated in Fig-12.

Fig-12 Exploiting the Non-linear Relation (what is the best setting for x, knowing that x can vary by ±5?)

In the curve of Fig-12, the point R is the optimum point, at which the dispersion in the response due to the dispersion in the noise factor is minimum. How to obtain this point for each noise factor is the challenge & beauty of the robust design technique.
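This first-order variance formula can be evaluated for any response function by estimating the sensitivity coefficients numerically. The sketch below (the response function is a made-up illustration) approximates each ∂f/∂xi by a central difference and sums the weighted variances, assuming uncorrelated noise factors as in the text:

```python
def propagated_variance(f, x_nom, sigmas, h=1e-6):
    """First-order variance of y = f(x): the sum of (df/dx_i)**2 * sigma_i**2,
    with each partial derivative estimated by a central difference about
    the nominal point x_nom."""
    var_y = 0.0
    for i, sigma in enumerate(sigmas):
        xp = list(x_nom); xp[i] += h
        xm = list(x_nom); xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)  # sensitivity coefficient df/dx_i
        var_y += dfdx ** 2 * sigma ** 2
    return var_y

# Hypothetical response y = x1 * x2 at nominal (3, 4) with noise sigmas (0.1, 0.2):
f = lambda x: x[0] * x[1]
print(propagated_variance(f, [3.0, 4.0], [0.1, 0.2]))  # ~ (4*0.1)^2 + (3*0.2)^2 = 0.52
```

Because the sensitivity coefficients depend on the nominal point, evaluating this at different control factor settings is exactly how one would hunt for the least sensitive (robust) setting.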

3.4 Tasks to be performed in Robust Design

A great deal of engineering time is spent in generating information about how different design parameters affect performance under different usage conditions. Robust design methodology serves as an "amplifier": it enables an engineer to generate the information needed for decision making with less than half the experimental effort. There are two important tasks to be performed in robust design, which can be considered the main tools used in the process of achieving robustness.

i) Measurement of quality during design & development: a leading indicator of quality by which the effect of changing a particular design parameter on performance can be evaluated.

ii) Efficient experimentation to find dependable information about the design parameters, so that design changes during manufacturing & customer use can be avoided, and so that the information is obtained with minimum time & resources. The estimated effects of the design parameters must remain valid even when other parameters are changed during subsequent design efforts or when the dimensions of related subsystems are changed.

This is achieved by employing the signal-to-noise ratio to measure quality & orthogonal arrays to study many design parameters simultaneously.

The Objective Function: The Signal-to-Noise Ratio

Since robust design is all about keeping the response mean on target while minimizing the variation in the response, a special type of objective function which captures both of these objectives is used in the robust design process. This is called the signal-to-noise (S/N) ratio, and it differs for different types of quality characteristics. Robust design is thus treated as an optimization process in which the objective function is the S/N ratio. The signal-to-noise ratio is a mathematical formula used to quantify the robustness of a design: it is the ratio of the signal (mean) to the noise (variability). The larger the S/N ratio, the more robust the performance. The signal-to-noise ratios for the prominent types of quality characteristics are given below.

For the nominal-the-best type:
S/N = 10 log10 (µ²/σ²)
where µ is the mean and σ² is the variance.

For the smaller-the-better type:
S/N = −10 log10 [(1/n) Σ yi²]  (sum over i = 1, ..., n)
This is based on the mean square deviation, because the ideal value here is zero.

For the larger-the-better type:
S/N = −10 log10 [(1/n) Σ (1/yi²)]  (sum over i = 1, ..., n)
This is also a mean square deviation; here, by maximizing the negative of the function, the deviations are minimized.

Fig-13 demonstrates that where the signal-to-noise ratio is maximum, the corresponding setting of the factor provides the least sensitivity of the response, so at that point the response is least sensitive to the variations in the noise factors.

Fig-13 Maximum S/N Ratio & the Robust Point (what is the best setting for x, knowing that x can vary by ±5?)
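The three S/N ratios above translate directly into code. A minimal sketch in Python (the function names are my own):

```python
import math

def sn_nominal(mean, var):
    """Nominal-the-best: S/N = 10 log10(mu^2 / sigma^2)."""
    return 10 * math.log10(mean ** 2 / var)

def sn_smaller(ys):
    """Smaller-the-better: S/N = -10 log10(mean of y_i^2)."""
    return -10 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_larger(ys):
    """Larger-the-better: S/N = -10 log10(mean of 1/y_i^2)."""
    return -10 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

# Halving the standard deviation at a fixed mean (variance 1.0 -> 0.25)
# raises the nominal-the-best S/N ratio by about 6 dB:
print(sn_nominal(10.0, 1.0))   # 20.0
print(sn_nominal(10.0, 0.25))
```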

4. Matrix Experiments using Orthogonal Arrays

Robust design draws on many ideas from statistical experimental design to plan experiments for obtaining dependable information about the variables involved in making engineering decisions. Various types of matrices are used for planning experiments to study several decision variables simultaneously; among them, robust design makes heavy use of orthogonal arrays. [2,3]

Robust design adds a new dimension to statistical experimental design. It explicitly addresses the following concerns faced by all product & process designers:

- How to reduce economically the variation of a product's function in the customer's environment (note that achieving the product function consistently on target maximizes customer satisfaction).
- How to ensure that decisions found to be optimum during laboratory experiments will prove to be so in manufacturing & in the customer's environment.

In addressing these concerns, robust design uses the mathematical formalism of statistical experimental design. A matrix experiment is a set of experiments in which the settings of the various parameters under study are changed from one experiment to another. After conducting a matrix experiment, the data from all the experiments in the set, taken together, are analyzed to determine the effects of the various parameters. The analysis of means (ANOM) and the analysis of variance (ANOVA) are used to interpret the data and find the sensitivity to each parameter of interest. Conducting the matrix experiments using special matrices called orthogonal arrays allows the effects of several parameters to be determined efficiently, and this is an important technique in robust design. The settings spanned by the different levels of the parameters are known as the experimental region or the region of interest.

Orthogonality is interpreted in a combinatoric sense, i.e. for any pair of columns, all combinations of factor levels occur, and they occur an equal number of times. This is called the balancing property, and it implies orthogonality. So an orthogonal array can be defined as a matrix with the columns representing the parameters to be studied with their different levels in different

combinations of experiments and the number of rows equal to the number of experiments. Standard orthogonal arrays have been designed and are available. Selection of an orthogonal array for a robust design project is based on the number of degrees of freedom of the experiment: the number of experiments should be greater than or equal to the number of degrees of freedom. Each parameter with n levels contributes n − 1 degrees of freedom, and the overall mean has one degree of freedom. In the case of a robust design project with 4 parameters at three levels each, the total degrees of freedom will be 1 + 4 × (3 − 1) = 9, so the selected standard orthogonal array should have at least 9 rows. An L4 array is an orthogonal array with 4 rows, an L8 array has 8 rows, and so on. Some of the standard orthogonal arrays are shown here in Tables 1, 2 & 3.

L4 Array: 3 variables, 2 levels

Expt.    Column
No.     1  2  3
 1      1  1  1
 2      1  2  2
 3      2  1  2
 4      2  2  1

L8 Array: 7 variables, 2 levels

Expt.         Column
No.     1  2  3  4  5  6  7
 1      1  1  1  1  1  1  1
 2      1  1  1  2  2  2  2
 3      1  2  2  1  1  2  2
 4      1  2  2  2  2  1  1
 5      2  1  2  1  2  1  2
 6      2  1  2  2  1  2  1
 7      2  2  1  1  2  2  1
 8      2  2  1  2  1  1  2

Table-1 & Table-2: L4 & L8 orthogonal arrays with 2 levels

Here the orthogonality is interpreted in a combinatoric sense, i.e. for any pair of columns, all combinations of factor levels occur, and they occur an equal number of times. The columns y1 to y4 in Table-3 are the different measurements taken at each setting to capture the noise effect. The performance or the responses measured in these matrix

experiments are analysed using ANOM & ANOVA to find the relative effects of the noises on the response. By this method, the optimum values of the control factors, at which the sensitivity of the response to the effect of the noise factors is minimum, can be found. The detailed steps in robust design are illustrated below.

L9 Orthogonal Array (4 variables, 3 levels) with noise-capturing columns

Trial     Column          Measurements
No.     1  2  3  4      y1   y2   y3   y4
 1      1  1  1  1       *    *    *    *
 2      1  2  2  2       *    *    *    *
 3      1  3  3  3       *    *    *    *
 4      2  1  2  3       *    *    *    *
 5      2  2  3  1       *    *    *    *
 6      2  3  1  2       *    *    *    *
 7      3  1  3  2       *    *    *    *
 8      3  2  1  3       *    *    *    *
 9      3  3  2  1       *    *    *    *

Table-3 L9 Orthogonal Array with Noise Capturing

As mentioned earlier, the columns y1 to y4 correspond to the response measurements taken to capture the effect of the noises. These measurements should be planned according to the sources of noise in the given problem. For each trial, the average of all these y values is taken (this is explained in the case study in chapter 5). Typically there are two choices regarding the noise factors:

- Improve the quality without controlling or removing the causes of variation, i.e. make the product robust against the noise factors.
- Improve the quality by controlling the noise factors, or by recommending certain actions to control them.

In either case, a formal approach to capturing the effect of the noise factors is required. So for each experiment, several measurements should be taken so as to capture the variations caused by the noises.
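The ANOM step on an L9 matrix experiment reduces to averaging the responses at each level of each factor. The sketch below uses the standard L9 array and made-up per-trial responses (imagine each is the average of the y1 to y4 noise-capturing measurements):

```python
# Standard L9(3^4) orthogonal array; entries are factor levels 1..3.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

def anom(array, responses):
    """Analysis of means: for every factor (column), the average
    response observed at each of its levels."""
    effects = []
    for col in range(len(array[0])):
        level_means = []
        for level in (1, 2, 3):
            vals = [r for row, r in zip(array, responses) if row[col] == level]
            level_means.append(sum(vals) / len(vals))
        effects.append(level_means)
    return effects

# Hypothetical per-trial mean responses for the nine trials:
responses = [5.2, 6.1, 7.0, 6.3, 7.2, 5.5, 7.4, 5.8, 6.6]
for factor, means in enumerate(anom(L9, responses), start=1):
    print("Factor", factor, "level means:", [round(m, 3) for m in means])
```

Each level mean averages three trials, and the balancing property guarantees that while one factor is held at a level, the other factors sweep through all of their levels, so the comparison between level means is fair.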

4.1 Steps in Robust Design

The detailed steps in robust design are explained here [3]. The experimentation procedure is highlighted below. There are three phases of experimental design:

1. Planning Stage
- define the areas of concern and the objective
- select the response or quality characteristic
- identify control and noise factors
- select factor levels
- select an appropriate experimental design
- identify interactions and assign factors to the experimental set-up

2. Conducting Stage
- conduct tests as prescribed in the experimental set-up

3. Analysis Phase
- analyze and interpret the results
- conduct confirmation experiments

4.2 Identification of control & noise factors

As explained earlier, control factors are those factors that a manufacturer can control in the design of a product, the design of a process, or during a process. Examples: design variables (widths, heights), assembly method, cooling temperature, cycle time, materials, speeds, feeds. According to the design problem, the control factors are to be identified. Similarly, the noise factors, which a designer or a manufacturer cannot control or wishes not to control (for cost reasons), are also to be identified. Examples: material inconsistencies, supplier variation, machine operators, ambient temperature, ambient humidity.

4.3 Selection of Factor levels

A minimum of two levels is necessary to estimate a factor's effect. Continuous factors must be discretized (preferably in equal intervals). Example: levels of a length parameter: 1 cm, 1.5 cm, 2.0 cm.

The more levels, the more experimental runs are necessary. The number of levels indicates the resolution of the effect that can be predicted. The advantage of taking a minimum of three points, which captures the second order effect, is demonstrated in Fig-14 and Fig-15. More levels capture more of the non-linearity, but require more experiments and associated effort, so an optimum of three levels is recommended.

Fig-14 (predicted effect with two levels) and Fig-15 (second order effect with three levels): plots of response vs factor A [figures not reproduced in this transcription]

Fig-14 demonstrates how the non-linearity can be missed if only two levels are taken, and Fig-15 shows that taking three levels gives a better representation of the actual effect, which can be non-linear. Even a three level combination will not capture the exact relationship, so the entire matrix experiment may have to be repeated several times to arrive at the most robust design.

4.4 Factor assignment

The selected factors are assigned to the different columns of the specific orthogonal array as shown below.

Table-4: Factor level assignment, with factors A, B, C & D (each at 3 levels) assigned to the columns of an L9 array [table entries not reproduced in this transcription]

The OA shown in Table-4 is a 3 level, 4 parameter, 9 experiment orthogonal array. The first column gives the trial number, from 1 to 9, and the remaining columns give the levels of the factors A, B, C & D. This results in nine experiments, with the factor level combination for each experiment given in the corresponding row. For example, the first experiment is conducted with all factors A, B, C & D at level 1; for the second experiment, factor A is at level 1 and all other factors are at level 2; and so on.

5. A Case Study

A specific case study [3] is now addressed to understand the different steps and aspects of robust design. Robust design is applied to a process of polysilicon deposition on thin wafers; the process set-up schematic is shown in Fig-16. Silane and nitrogen gas are introduced at one end and pumped out at the other. The silane gas pyrolyzes, and a polysilicon layer is deposited on top of the oxide layer on the wafers. Two carriers, each carrying 25 wafers, can be placed inside the reactor at a time, so that polysilicon is simultaneously deposited on 50 wafers. The problems observed were (i) too many surface defects and (ii) too large a thickness variation. So the robust design methodology is adopted to improve the performance or quality of the process. The objective here is to achieve a uniform thickness and minimize the surface defects. Based on expertise, the non-uniform thickness and surface defects are attributed to:

- variations in the parameters involved in the chemical reaction associated with the deposition process;
- the concentration gradient along the length of the reactor;
- the flow pattern (direction and speed) of the gases, which need not be the same at all positions;
- the temperature variation along the length.

To capture these effects, test wafers are positioned at locations 3, 23 & 48 along the length (the remaining 47 being dummy wafers), and to capture the effect of noise variation across a wafer, the thickness and surface defects are measured at three different points on each wafer: top, middle & bottom. So there are nine measurements of thickness and of surface defects for each combination of control factor settings in the matrix experiment.

Fig-16: Process set-up schematic [figure not reproduced in this transcription]

The quality characteristics here are the polysilicon thickness and the surface defects. Specification: thickness should be within +/- 8 % of the target, and the surface defect count should be less than 10 per sq.cm.

The economics of the manufacturing process is determined by the throughput as well as the quality of the product produced. So along with the quality characteristics, a throughput characteristic, which here is the deposition rate, should also be studied. The signal to noise ratios for the different quality characteristics are identified as given below.

Thickness data is of the nominal-the-best type, therefore S/N ratio = η = 10 log10 (µ²/σ²), where µ is the mean and σ² is the variance.

Defect count is of the smaller-the-better type, therefore S/N ratio = η = -10 log10 (mean square defect count).

S/N for the deposition rate on the decibel scale: η = 10 log10 r², where r is the observed deposition rate in angstroms.

The goal in optimization for thickness is to minimize the variance while keeping the mean on target. This is a constrained optimization problem, which can be very difficult to solve. When a scaling factor (a factor that increases thickness proportionally at all points on the wafers) exists, the problem can be simplified greatly. Here the deposition time is a scaling factor, i.e. thickness = deposition rate x deposition time. The deposition rate may vary from one wafer to the next, or from one position to another, due to noise factors; however, the thickness at any point is proportional to the deposition time. So the approach is to maximize the signal to noise ratio and then adjust the deposition time so that the mean thickness is on target. The different control factors and their levels are shown in Table-5.
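The three S/N ratios quoted above can be expressed as small functions; this is a sketch, and the sample-variance handling is an assumption rather than a detail stated in the report:

```python
import math

# Sketch of the three S/N ratio types (base-10 logarithms, decibels).

def sn_nominal_the_best(samples):
    """eta = 10 log10(mu^2 / sigma^2) for nominal-the-best characteristics."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / (n - 1)   # sample variance
    return 10 * math.log10(mu ** 2 / var)

def sn_smaller_the_better(samples):
    """eta = -10 log10(mean square) for smaller-the-better characteristics."""
    msq = sum(x ** 2 for x in samples) / len(samples)
    return -10 * math.log10(msq)

def sn_deposition_rate(r):
    """eta = 10 log10(r^2) for the observed deposition rate r."""
    return 10 * math.log10(r ** 2)

print(sn_nominal_the_best([9, 10, 11]))   # mu = 10, sigma^2 = 1 -> 20 dB
```

Note the nominal-the-best ratio is invariant to the scaling factor discussed above: multiplying all samples by a constant scales µ and σ equally, so η is unchanged, which is why the mean can be brought on target afterwards via the deposition time.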

Table-5: Control factors and their levels [table entries not reproduced in this transcription]

The total degrees of freedom = 6 x (3-1) + 1 = 13, so the orthogonal array should have a minimum of 13 rows. Accordingly an L18 is selected, because for three-level testing the L18 is the next available standard array with at least 13 rows. (Standard orthogonal arrays for different numbers of parameters and levels have already been developed and are available in the literature [2,3].) The L18 orthogonal array is shown in Table-6, and the parameters with the assigned levels are shown as the experimenter's log in Table-7. The matrix experiment results for the surface defect count are tabulated in Table-8, and the thickness measurements in Table-9.
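The degrees-of-freedom rule used for array selection can be sketched as follows. The candidate-array list is illustrative: a real selection must also match the level structure of the array's columns, which is why the three-level candidates here are restricted to L9/L18/L27.

```python
# Sketch (not from the report): selecting the smallest orthogonal array
# whose row count covers the experiment's degrees of freedom.

def degrees_of_freedom(factor_levels):
    """One df for the overall mean plus (levels - 1) for each factor."""
    return 1 + sum(levels - 1 for levels in factor_levels)

def select_array(factor_levels, candidate_rows):
    """Smallest L_n among the candidates with at least df rows."""
    df = degrees_of_freedom(factor_levels)
    for rows in sorted(candidate_rows):
        if rows >= df:
            return f"L{rows}"
    raise ValueError("no candidate array is large enough")

# Case study: six three-level factors -> df = 1 + 6*2 = 13; among the
# three-level arrays L9/L18/L27 the smallest with >= 13 rows is the L18.
print(select_array([3] * 6, candidate_rows=(9, 18, 27)))   # -> L18
```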

Table-6: L18 orthogonal array and factor assignment [3] [table entries not reproduced in this transcription]

Table-7: Experimenter's log [table entries not reproduced in this transcription]

Table-8: Data on surface defect count [table entries not reproduced in this transcription]

Table-9: Thickness & deposition rate data [table entries not reproduced in this transcription]

Table-10: Mean thickness, surface defect count and deposition rate for the 18 experiments; the mean thickness varies between 1958 and 5965 Angstrom [table entries not reproduced in this transcription]

Table-11: Analysis of surface defects data [table entries not reproduced in this transcription]

Table-12: Analysis of thickness data [table entries not reproduced in this transcription]

Table-13: Analysis of deposition rate data [table entries not reproduced in this transcription]

Fig-17: Plots of S/N ratio vs parameter levels [3] [figure not reproduced in this transcription]

Table-14: Summary of factor effects [3] [table entries not reproduced in this transcription]

Table-15: Results of verification experiment [table entries not reproduced in this transcription]

From Table-10, which gives the mean thickness, surface defect count and deposition rate, it can be observed that the mean thickness of the 18 experiments varies between 1958 and 5965 Angstrom, whereas the target value is 3600 Angstrom. In order to evaluate the relative effects of the variations in the different parameters on the performance, the analysis of variance (ANOVA) is performed. A more realistic feel for the relative effects of the different factors, incorporating the error variance, can be obtained by the decomposition of variance, commonly called analysis of variance. Here

Total sum of squares = grand total sum of squares - sum of squares due to mean:

Σ_{i=1}^{n} (η_i - m)² = Σ_{i=1}^{n} η_i² - n m²

Sum of squares due to factor A = total squared deviation of the effect of factor A from the overall mean:

= 6 (m_A1 - m)² + 6 (m_A2 - m)² + 6 (m_A3 - m)²

since each level of A occurs in 6 of the 18 trials. ANOVA generates the variance ratio F for each factor. A larger value of F means the effect of the factor is large compared to the error variance: if F < 1, the factor effect is small and can be neglected; if F > 2, the factor effect is not small; if F > 4, the factor effect is quite large.

Error variance σ_e² = (sum of squares due to error) / (degrees of freedom for error)

The variance of the effect of each factor level in this case is (1/6) σ_e², so the width of the 2σ confidence interval for each estimated effect is +/- 2 σ_e / √6.

The results of ANOVA on the surface defects, thickness and deposition rate data are given in Tables 11, 12 & 13. The signal to noise ratios for the different factors are shown graphically in Fig-17, and the summary of factor effects is tabulated in Table-14.

Inference: Based on the results of ANOVA and the signal to noise ratio pattern, it can be inferred that the deposition temperature has the largest effect on all characteristics. By moving from A2 to A1, η for surface defects can be improved by 26 dB, equivalent to a 20-fold

reduction in the rms surface defect count. The effect on thickness uniformity is only 0.21 dB, but there is a reduction in deposition rate of 5.4 dB, i.e. a 2-fold reduction. For E & F the optimum settings are obviously E2 & F2. However, for factors A through D, the directions in which the quality characteristics (surface defects and thickness uniformity) improve tend to reduce the deposition rate. So a trade-off between quality loss and productivity must be made in choosing the optimum levels. In the case study, A2 is changed to A1, so the selected combination is A1 B2 C1 D3 E2 F2. The verification experiment can then be done with this selected parameter level combination and compared with the predicted value; the results of the verification experiment are tabulated in Table-15.

From the orthogonal array experiments it can also be found that there are certain parameters whose variation does not have any significant effect on the response. The tolerances on these parameters can therefore be relaxed to reduce the cost of fabrication. This method is known as tolerance design. Parameter design and tolerance design together thus help to realise a robust product at a lower cost. The robust design method accordingly advocates a 3-step design philosophy, as shown below, to achieve a robust, cost effective and reliable product:

Step 1. System Design: concept design and synthesis; innovation and creativity
Step 2. Parameter Design: parameter sizing to ensure robustness to variations
Step 3. Tolerance Design: establish product and process tolerances to minimize costs

Optimization of a process or a product need not be completed in a single matrix experiment; several matrix experiments may have to be conducted in sequence before a product or process robust design is complete.
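The variance decomposition and F-ratio described above can be sketched numerically; the S/N values below are randomly generated, not the case-study data, and only one factor is decomposed for brevity:

```python
import numpy as np

# Sketch of the ANOVA decomposition on illustrative L18-style data:
# eta holds one S/N value per trial; levels_A is a factor's level column.
rng = np.random.default_rng(1)
eta = rng.normal(-40.0, 5.0, size=18)
levels_A = np.repeat([1, 2, 3], 6)          # each level occurs 6 times

n = eta.size
m = eta.mean()                              # overall mean
total_ss = np.sum((eta - m) ** 2)           # = sum(eta_i^2) - n*m^2

# Sum of squares due to factor A: 6(m_A1-m)^2 + 6(m_A2-m)^2 + 6(m_A3-m)^2
ss_A = sum(6 * (eta[levels_A == lv].mean() - m) ** 2 for lv in (1, 2, 3))

# Treating everything not explained by A as error (other factors omitted):
ss_error = total_ss - ss_A
df_A, df_error = 2, (n - 1) - 2
F_A = (ss_A / df_A) / (ss_error / df_error) # variance ratio for factor A
print(total_ss, ss_A, F_A)
```

In a full analysis the error sum of squares would be what remains after subtracting the sums of squares of all assigned factors, and F would be compared against the rough thresholds (1, 2, 4) quoted above.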

6. Methods of simulating the variation in noise factors

In analysing the effect of variation in noise factors on the response, it is very important to correctly simulate the variations in the noise factors. There are three different methods of evaluating the mean and variance of a product response resulting from variations in many noise factors: Monte Carlo simulation, Taylor series expansion, and orthogonal array based simulation.

Monte Carlo Simulation

In this method a random number generator is used to simulate a large number of combinations of the noise factors, called testing conditions. The value of the response is computed for each testing condition, and the mean and variance of the response are then calculated. To obtain accurate estimates of the mean and variance, the Monte Carlo method requires evaluation of the response under a large number of testing conditions. This can be very expensive, especially if many combinations of control factor levels are also to be compared.

Taylor Series Expansion

In this method, the mean response is estimated by setting each noise factor equal to its nominal value. To estimate the variance of the response, the derivative of the response with respect to each noise factor is found [3,4]. Let R denote the response and σ₁², σ₂², ..., σ_n² denote the variances of the n noise factors. The variance of R is then computed by the formula:

σ_R² = Σ_{i=1}^{n} (∂R/∂x_i)² σ_i², where x_i is the i-th noise factor.

This equation, based on a first order Taylor series expansion, gives quite accurate estimates of the variance when the correlations among the noise factors are negligible and the tolerances are small, so that interactions among the noise factors and the higher order terms are negligible. Otherwise a higher order Taylor series expansion must be used, which

makes the formula for evaluating the response quite complicated and computationally expensive. This method therefore does not always give an accurate estimate of the variance.

Orthogonal array based simulation

In this method, proposed by Dr. Taguchi, orthogonal arrays are used to sample the domain of noise factors. For each noise variable either two or three levels are taken. Suppose µ_i and σ_i² are the mean and variance, respectively, of the noise variable x_i. When two levels are taken, they are chosen as µ_i - σ_i and µ_i + σ_i; note that the mean and variance of these two levels are µ_i and σ_i², respectively. Similarly, when three levels are taken, they are chosen as µ_i - √(3/2) σ_i, µ_i and µ_i + √(3/2) σ_i. The details of this method have already been covered in the previous chapters.

The advantage of this method over the Monte Carlo method is that it needs a much smaller (order of magnitude smaller) number of testing conditions, yet the accuracy is excellent. The orthogonal array based simulation also gives common testing conditions for comparing two or more combinations of control factor settings. Further, when interactions and correlations among the noise factors are strong, the orthogonal array based simulation gives more accurate estimates of the mean and variance than the Taylor series expansion.
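The three methods can be compared on a toy response R = x1·x2 with independent normal noises. This sketch crosses the Taguchi three-level points as a full 3x3 grid; a true orthogonal array would subsample larger grids, and all numerical values are illustrative:

```python
import numpy as np

# Toy comparison: R(x1, x2) = x1 * x2, with x_i ~ N(mu_i, sigma_i^2).
mu = np.array([10.0, 5.0])
sigma = np.array([0.5, 0.2])

def response(x):
    return x[..., 0] * x[..., 1]

# 1) Monte Carlo: many random testing conditions.
rng = np.random.default_rng(0)
mc_var = response(rng.normal(mu, sigma, size=(100_000, 2))).var()

# 2) First-order Taylor series: sum of (dR/dx_i)^2 * sigma_i^2 at the mean.
grads = np.array([mu[1], mu[0]])            # dR/dx1 = x2, dR/dx2 = x1
taylor_var = float(np.sum(grads ** 2 * sigma ** 2))

# 3) Taguchi-style three levels mu_i +/- sqrt(3/2) sigma_i per noise factor;
#    these three equally weighted points reproduce each noise's mean/variance.
d = np.sqrt(1.5) * sigma
levels = [np.array([m - s, m, m + s]) for m, s in zip(mu, d)]
grid = np.array([[a, b] for a in levels[0] for b in levels[1]])
oa_var = float(response(grid).var())

print(mc_var, taylor_var, oa_var)
```

For this product response the exact variance is µ1²σ2² + µ2²σ1² + σ1²σ2² = 10.26: the grid method recovers it exactly (the interaction term included), the first-order Taylor estimate misses the σ1²σ2² interaction term and gives 10.25, and Monte Carlo converges to 10.26 only with many samples, which illustrates the trade-offs discussed above.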

7. Reliability Improvement

There are three fundamental ways of improving the reliability of a product during the design stage: (1) reduce the sensitivity of the product's function to variation in the product parameters, (2) reduce the variation in the product parameters, and (3) provide redundancy. The first approach is the parameter design part of the robust design process. The second approach is analogous to tolerance design, and it typically involves more expensive components and manufacturing processes; it should therefore be considered only after the sensitivity has been minimized. The third approach is used when the cost of failure of the product is high compared to the cost of providing redundant components, or even a redundant product.

7.1 Role of S/N Ratios in Reliability improvement

Reliability characterization refers to building a statistical model for the failure times of the product; log-normal and Weibull distributions are commonly used for modeling failure times. Reliability improvement means changing the product design, including the settings of the control factors, so that the time to failure increases. To improve a product's reliability, the appropriate quality characteristics should be identified and their sensitivity to noise minimized; this automatically increases the product's life. The following example clarifies the relationship between the life of a product and its sensitivity to noise factors.

Consider an electrical circuit whose output voltage, y, is a critical characteristic. If it deviates too far from the target, the circuit's function fails. Suppose variation in a resistor R plays a key role in the variation of y. Also suppose the resistance R is sensitive to the environmental temperature, and that the resistance increases at a certain rate with aging. During the use of the circuit, the ambient temperature may go too high or too low, or sufficient time may pass, leading to a large deviation in R.
Consequently the characteristic y would go outside the limits and the product would fail. Now, if the nominal values of appropriate control factors are changed so that y is much less sensitive

to variation in R, then for the same ambient temperature faced by the circuit, and for the same rate of change of R due to aging, we would get a longer life out of that circuit. The sensitivity of the voltage y to the noise factors is measured by the S/N ratio. In the process of improving the S/N ratio, only temperature is used as the noise factor: reducing sensitivity to temperature means reducing sensitivity to variation in R, and hence also reducing sensitivity to the aging of R. Thus, by an appropriate choice of testing conditions (noise factor settings), robust design helps in increasing the life, and thus the reliability, of the product.

8. Design Optimization under Uncertainty

Variations are the biggest challenge in the design optimization process and the biggest enemy of quality. The variations can be of different types, and their sources can be considered as uncertainties. Uncertainty is inevitable in design and development. Some of the sources of uncertainty are listed below [7]:

i) scenarios & assumptions
ii) lack of confidence in modeling
iii) experimental data
iv) variation of physical properties
v) changing operating environment
vi) variation related to fabrication

In the robust design methodology, all these causes are identified as noises. Robust optimization results in a design which performs optimally under the variable (or uncertain) conditions over the entire lifetime of the design [11]. Strictly speaking, the robust design approach of Dr Taguchi covers all aspects of uncertainty and helps in increasing the reliability of the product, as explained in chapter 7. It is up to the imagination of the designer to identify the correct control factors and noise factors so as to capture the effects of uncertainty in the orthogonal array based simulation process. In subsequent research, however, robust design has been treated as having distinct goals (though all of these goals are covered by the single objective of maximizing quality and minimizing cost in the quality engineering concept). The three goals of robust design are identified [12,13] as:

1) Identify designs that minimize the variability of performance under uncertain conditions.
2) Provide the best overall performance over the entire lifetime of the product.
3) Mitigate the detrimental effects of worst-case performance, i.e. choose the design with the best worst-case performance.

Reliability based design is based on the estimation of the probability distribution of a system response from the known probability distributions of the random variables in

a system [8]. In these methods the constraint functions are converted to probabilistic constraints.

Robust Design Optimization (RDO) and Reliability Based Design Optimization (RBDO)

To appreciate the difference between RDO & RBDO, a classification of the uncertainties encountered in a product's life is depicted in Fig-18 [6,12].

Fig-18: Uncertainty classification (impact of events vs frequency of events: everyday fluctuations causing performance loss are addressed by robust design & optimization with cost-benefit analysis; extreme events causing catastrophe are addressed by reliability based design & optimization with risk analysis) [figure not reproduced in this transcription]

Robust optimization techniques account for the impact of everyday fluctuations of the parameters on the overall design performance, assuming that no catastrophic failure occurs. Here the primary objective is to improve the quality of a product by minimizing the effect of the causes of variation without eliminating the causes. The robust design philosophy is illustrated in Fig-19.

Fig-19: Robust design principle (probability distributions of performance, showing the bias removed and the quality distribution tightened around the target range) [figure not reproduced in this transcription]

Mathematical formulation of the robust design problem

The conventional optimization model is defined as

Minimize OBJ(d)
s.t. G_i(d) ≤ 0, i = 1, 2, ..., NC
d^L ≤ d ≤ d^U

where OBJ is the objective function, G_i is the i-th constraint function, NC is the number of constraints, d is the design variable vector, and d^L & d^U are the lower and upper bounds on d.

In robust design, the objective is to keep the mean on target and minimize the variation, so the mean and standard deviation of the response constitute the objective function. The formulation is then

Minimize OBJ[µ_R, σ_R]
s.t. G_i(µ) + k σ_Gi ≤ 0, i = 1, 2, ..., NC
d^L ≤ d ≤ d^U

where µ_R, σ_R are the mean and standard deviation of the response R, G_i(µ) and σ_Gi are the mean and standard deviation respectively of the i-th constraint function, and k is the

penalty factor decided by the designer [5]; d is the design variable vector and d^L & d^U are its lower and upper bounds. Here the objective function takes care of the signal and noise factors, and the constraint functions are modified such that the variation allowed in them is limited by the sigma bounds. If orthogonal array based simulations are used, the robust design which minimizes the variability of performance under uncertain (manufacturing and operating) conditions and the robust design which provides the best overall performance over the entire lifetime (see chapter 7) have the same formulation as above. If orthogonal arrays and signal to noise ratios are not used, and the response variances are computed from the known variances of the design parameters, then the objective function is as shown below:

OBJ[µ_R, σ_R] = Σ_{j=1}^{NPR} [ w_1j (µ_Rj - R_j^t)² + w_2j σ_Rj² ]
s.t. G_i(µ) + k σ_Gi ≤ 0, i = 1, 2, ..., NC
d^L ≤ d ≤ d^U

where w_1j is the weight parameter for keeping the j-th mean on target, w_2j that for making the j-th performance robust, µ_Rj and σ_Rj are the mean and standard deviation of the j-th performance, R_j^t is the target value of the j-th performance, and NPR is the number of performances to be made robust.

For the robust design for best overall performance over the entire lifetime [10,12], the objective function is based on the joint probability density function of the random variables x. According to the theory of probability & statistics, the integral of the probability density function gives the probability, and when this is multiplied by the performance function, the expected value corresponding to that probability is obtained. The expression given below is based on this theory.

OBJ(d, x) = ∫_x Σ_{j=1}^{NPR} w_j R_j(d, x) f_x(x) dx
s.t. G_i(µ) + k σ_Gi ≤ 0, i = 1, 2, ..., NC
d^L ≤ d ≤ d^U

where f_x(x) is the joint probability density function of the random variables x, R_j(d, x) is the j-th performance function to be minimized, and w_j is the weight parameter for the j-th performance to be made robust.

Mathematical formulation of the reliability based design optimization (RBDO) problem

In RBDO problems, the objective is to maximize the expected system performance while satisfying constraints that ensure reliable operation. Because the system parameters are not necessarily deterministic, the objective function and constraints must be stated probabilistically. For example, RBDO can determine the manufacturing tolerance required to achieve a target product reliability, because the method treats the manufacturing uncertainties, such as dimensional tolerances, as probabilistic constraints [8]. RBDO thus ensures proper levels of safety and reliability for the system designed. The mathematical formulation of RBDO is shown below:

Minimize OBJ(d)
s.t. P{ G_i(d) ≤ c } ≥ CFL_i, i = 1, 2, ..., NC
d^L ≤ d ≤ d^U

where CFL_i is the confidence level associated with the i-th constraint, P denotes probability, G_i(d) is the i-th constraint function and c is its limiting value. The following example [8] clarifies the concept of a probabilistic constraint:

P( stress_1 ≤ σ_y ) ≥ 99.0 %, where σ_y is the yield stress.

That is, since there is some uncertainty in the material properties, instead of stating the constraint as stress_1 ≤ σ_y, it is stated as: the probability that stress_1 ≤ σ_y is greater than or equal to 99.0 %.

Robust & Reliability Based Design

When the objective function is based on the robust design principle (with the mean and standard deviation of the response), focussing on making the response insensitive to variations in the design variables, and the constraints are converted into probabilistic constraints with an assigned probability for each constraint function, the result is a Robust & Reliability Based Design Optimization (RRBDO) [9,13]. The mathematical formulation of this method is given below:

Minimize OBJ[µ_R, σ_R]
s.t. P{ G_i(d) ≤ c } ≥ P_oi, i = 1, 2, ..., NC
d^L ≤ d ≤ d^U

where NC is the number of constraints and the objective function is defined as

OBJ[µ_R, σ_R] = Σ_{j=1}^{NPR} [ w_1j (µ_Rj - R_j^t)² + w_2j σ_Rj² ]

The different parameters in this definition have already been explained in the formulation for robust design. This approach yields a design whose response is insensitive to the effects of noises and whose reliability can be predicted from the reliabilities apportioned to the different constraints.
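A minimal sketch of these RRBDO ingredients follows; the helper names, weights, targets, and the normality assumption on each constraint function are illustrative, not from the report:

```python
import math

# Sketch: the weighted mean-on-target / variance objective, and a
# probabilistic constraint P{G <= c} >= P0 checked under an assumed
# normal distribution for the constraint function G.

def robust_objective(mu_R, sigma_R, targets, w1, w2):
    """OBJ = sum_j [ w1_j (mu_Rj - R_j^t)^2 + w2_j sigma_Rj^2 ]."""
    return sum(a * (m - t) ** 2 + b * s ** 2
               for m, s, t, a, b in zip(mu_R, sigma_R, targets, w1, w2))

def constraint_satisfied(mu_G, sigma_G, c, p0):
    """P{G <= c} >= p0, with G assumed ~ Normal(mu_G, sigma_G^2)."""
    z = (c - mu_G) / sigma_G
    prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # normal CDF at c
    return prob >= p0

# Example: a stress with mean 250 MPa and sd 10 MPa against a 280 MPa
# limit at 99% confidence; the margin is 3 sigma, so it holds.
print(constraint_satisfied(250, 10, 280, 0.99))   # -> True
```

An optimizer would minimize `robust_objective` over the design variables while rejecting any design for which `constraint_satisfied` is False for some constraint, which mirrors the formulation above.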

9. Application of Robust Design in Aerospace Systems

The principles of robust design are being used effectively in the aerospace design field for tackling uncertainties related to manufacturing and operation. Two such problems from the published literature [12,14] are discussed here, to highlight the use of robust design concepts in analyzing the sensitivity of performance to manufacturing tolerances and in tackling operational uncertainty.

Parametric optimization of manufacturing tolerances at the aircraft surface [14]

This study was aimed at reducing aircraft cost by relaxing manufacturing tolerances. Conventionally, aircraft surface smoothness requirements have been aerodynamically driven, with tight manufacturing tolerances to minimize drag; but this drives the cost up. So in this research work [14], a strategy to reduce aircraft cost through manufacturing tolerance relaxation at the wetted surface is investigated. For this, a preliminary study was conducted on eleven key manufacturing features on the surface assembly of an isolated nacelle. The manufacturing tolerance allocation for aerodynamic surfaces at the assembly joints is generated from specifications laid down by aerodynamicists to minimize aircraft parasite drag, that is, to reduce fuel burn. One of the causes of parasite drag increase is degradation of the surface smoothness by, for example, discrete roughness on the component parts and at their subassembly joints. These are seen as aerodynamic defects, collectively termed excrescence effects, typically: i) mismatches (steps etc.), ii) gaps, iii) contour deviation and iv) fastener flushness (rivets, etc.) on the wetted surface. The excrescence drag arising from these aerodynamic defects is of a considerably lower order of magnitude than the total drag of the aircraft.
With today's manufacturing standards and proper tolerance allocation, the excrescence drag due to surface roughness can be reduced to rather small but still significant values; but this has an implication on cost. As a remedial measure, a tolerance relaxation trade-off

study between drag increase (loss of quality function) and manufacturing cost reduction (gain) was conducted. The main components of the nacelle, along with the 11 key features affecting excrescence drag, are shown below. Tolerance synthesis on each of these features is done by relaxing the tolerances and estimating the corresponding drag increase using CFD.

Fig-20: The nacelle's 11 key features [figure not reproduced in this transcription]

The four types of surface excrescence at the key manufacturing features are shown below [14].

Fig-21: Surface excrescences at the key manufacturing features [figure not reproduced in this transcription]

The percentage cost saving in each case is also evaluated using the cost model. The tolerance allocation at each feature, with the existing limit and the relaxed optimum limit (with % increase), the corresponding % drag increase, and the savings as a percentage of the nacelle cost, are tabulated in Table-15 below [14].

Table-15: Tolerance synthesis [table entries not reproduced in this transcription]

The results show the feature-by-feature percentage changes for one nacelle, with a drag coefficient increment of 0.824% and a cost reduction of 2.26% of the nacelle cost. This results in a 0.421% overall reduction in the DOC (Direct Operating Cost) of the transport aircraft. In this work, the effect on performance of relaxing the manufacturing tolerances at the eleven selected locations on the nacelle was studied through estimates of the corresponding drag increases, and the cost saving resulting from relaxing these tolerances was estimated. Further research is planned by the same group to extend the study to the wing and fuselage.

OPTIMISATION OF PIN FIN HEAT SINK USING TAGUCHI METHOD

OPTIMISATION OF PIN FIN HEAT SINK USING TAGUCHI METHOD CHAPTER - 5 OPTIMISATION OF PIN FIN HEAT SINK USING TAGUCHI METHOD The ever-increasing demand to lower the production costs due to increased competition has prompted engineers to look for rigorous methods

More information

CHAPTER 4. OPTIMIZATION OF PROCESS PARAMETER OF TURNING Al-SiC p (10P) MMC USING TAGUCHI METHOD (SINGLE OBJECTIVE)

CHAPTER 4. OPTIMIZATION OF PROCESS PARAMETER OF TURNING Al-SiC p (10P) MMC USING TAGUCHI METHOD (SINGLE OBJECTIVE) 55 CHAPTER 4 OPTIMIZATION OF PROCESS PARAMETER OF TURNING Al-SiC p (0P) MMC USING TAGUCHI METHOD (SINGLE OBJECTIVE) 4. INTRODUCTION This chapter presents the Taguchi approach to optimize the process parameters

More information

LOCATION AND DISPERSION EFFECTS IN SINGLE-RESPONSE SYSTEM DATA FROM TAGUCHI ORTHOGONAL EXPERIMENTATION

LOCATION AND DISPERSION EFFECTS IN SINGLE-RESPONSE SYSTEM DATA FROM TAGUCHI ORTHOGONAL EXPERIMENTATION Proceedings of the International Conference on Manufacturing Systems ICMaS Vol. 4, 009, ISSN 184-3183 University POLITEHNICA of Bucharest, Machine and Manufacturing Systems Department Bucharest, Romania

More information

Robust Design Methodology of Topologically optimized components under the effect of uncertainties

Robust Design Methodology of Topologically optimized components under the effect of uncertainties Robust Design Methodology of Topologically optimized components under the effect of uncertainties Joshua Amrith Raj and Arshad Javed Department of Mechanical Engineering, BITS-Pilani Hyderabad Campus,

More information

Development of a tool for the easy determination of control factor interaction in the Design of Experiments and the Taguchi Methods

Development of a tool for the easy determination of control factor interaction in the Design of Experiments and the Taguchi Methods Development of a tool for the easy determination of control factor interaction in the Design of Experiments and the Taguchi Methods IKUO TANABE Department of Mechanical Engineering, Nagaoka University

More information

CHAPTER 5 SINGLE OBJECTIVE OPTIMIZATION OF SURFACE ROUGHNESS IN TURNING OPERATION OF AISI 1045 STEEL THROUGH TAGUCHI S METHOD

CHAPTER 5 SINGLE OBJECTIVE OPTIMIZATION OF SURFACE ROUGHNESS IN TURNING OPERATION OF AISI 1045 STEEL THROUGH TAGUCHI S METHOD CHAPTER 5 SINGLE OBJECTIVE OPTIMIZATION OF SURFACE ROUGHNESS IN TURNING OPERATION OF AISI 1045 STEEL THROUGH TAGUCHI S METHOD In the present machine edge, surface roughness on the job is one of the primary

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

HOW TO PROVE AND ASSESS CONFORMITY OF GUM-SUPPORTING SOFTWARE PRODUCTS XX IMEKO World Congress Metrology for Green Growth September 9-14, 2012, Busan, Republic of Korea HOW TO PROVE AND ASSESS CONFORMITY OF GUM-SUPPORTING SOFTWARE PRODUCTS N. Greif, H. Schrepf Physikalisch-Technische

Optimization of Process Parameter for Surface Roughness in Drilling of Spheroidal Graphite (SG 500/7) Material Optimization of Process Parameter for Surface Roughness in ing of Spheroidal Graphite (SG 500/7) Prashant Chavan 1, Sagar Jadhav 2 Department of Mechanical Engineering, Adarsh Institute of Technology and

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu FMA901F: Machine Learning Lecture 3: Linear Models for Regression Cristian Sminchisescu Machine Learning: Frequentist vs. Bayesian In the frequentist setting, we seek a fixed parameter (vector), with value(s)

CHAPTER 3 AN OVERVIEW OF DESIGN OF EXPERIMENTS AND RESPONSE SURFACE METHODOLOGY 23 CHAPTER 3 AN OVERVIEW OF DESIGN OF EXPERIMENTS AND RESPONSE SURFACE METHODOLOGY 3.1 DESIGN OF EXPERIMENTS Design of experiments is a systematic approach for investigation of a system or process. A series

Parametric Optimization of Energy Loss of a Spillway using Taguchi Method Parametric Optimization of Energy Loss of a Spillway using Taguchi Method Mohammed Shihab Patel Department of Civil Engineering Shree L R Tiwari College of Engineering Thane, Maharashtra, India Arif Upletawala

Error Analysis, Statistics and Graphing This semester, most of the labs require us to calculate a numerical answer based on the data we obtain. A hard question to answer in most cases is: how good is your

Data analysis using Microsoft Excel Introduction to Statistics Statistics may be defined as the science of collection, organization, presentation, analysis and interpretation of numerical data for logical analysis. 1. Collection of Data

Experimental Investigation of Material Removal Rate in CNC TC Using Taguchi Approach February 05, Volume, Issue JETIR (ISSN-49-56) Experimental Investigation of Material Removal Rate in CNC TC Using Taguchi Approach Mihir Thakorbhai Patel Lecturer, Mechanical Engineering Department, B.

Lecture: Simulation. of Manufacturing Systems. Sivakumar AI. Simulation. SMA6304 M2 ---Factory Planning and scheduling. Simulation - A Predictive Tool SMA6304 M2 ---Factory Planning and scheduling Lecture Discrete Event of Manufacturing Systems Simulation Sivakumar AI Lecture: 12 copyright 2002 Sivakumar 1 Simulation Simulation - A Predictive Tool Next

Hideki SAKAMOTO 1 Ikuo TANABE 2 Satoshi TAKAHASHI 3 Journal of Machine Engineering, Vol. 14, No. 2, 2014 Taguchi methods, production, management, optimum condition, innovation Hideki SAKAMOTO 1 Ikuo TANABE 2 Satoshi TAKAHASHI 3 DEVELOPMENT OF PERFECTLY

Robust Design: Experiments for Better Products Robust Design: Experiments for Better Products Taguchi Techniques Robust Design and Quality in the Product Development Process Planning Planning Concept Concept Development Development System-Level System-Level

Performance Estimation and Regularization. Kasthuri Kannan, PhD. Machine Learning, Spring 2018 Performance Estimation and Regularization Kasthuri Kannan, PhD. Machine Learning, Spring 2018 Bias- Variance Tradeoff Fundamental to machine learning approaches Bias- Variance Tradeoff Error due to Bias:

DOWNLOAD PDF BIG IDEAS MATH VERTICAL SHRINK OF A PARABOLA Chapter 1 : BioMath: Transformation of Graphs Use the results in part (a) to identify the vertex of the parabola. c. Find a vertical line on your graph paper so that when you fold the paper, the left portion

Introduction to Exploratory Data Analysis Introduction to Exploratory Data Analysis Ref: NIST/SEMATECH e-handbook of Statistical Methods http://www.itl.nist.gov/div898/handbook/index.htm The original work in Exploratory Data Analysis (EDA) was

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

THE QUALITY LOSS FUNCTION THE QUALITY LOSS FUNCTION Quality ultimate system performance measure Variability relates to quality Variability increases quality lost Can this loss be measured? The concept % defective has been widely

CHAPTER 3 SIMULATION TOOLS AND CHAPTER 3 SIMULATION TOOLS AND Simulation tools used in this simulation project come mainly from Integrated Systems Engineering (ISE) and SYNOPSYS and are employed in different areas of study in the simulation

IMECE FUNCTIONAL INTERFACE-BASED ASSEMBLY MODELING Proceedings of IMECE2005 2005 ASME International Mechanical Engineering Congress and Exposition November 5-11, 2005, Orlando, Florida USA IMECE2005-79945 FUNCTIONAL INTERFACE-BASED ASSEMBLY MODELING James

DESIGN AND EVALUATION OF MACHINE LEARNING MODELS WITH STATISTICAL FEATURES EXPERIMENTAL WORK PART I CHAPTER 6 DESIGN AND EVALUATION OF MACHINE LEARNING MODELS WITH STATISTICAL FEATURES The evaluation of models built using statistical in conjunction with various feature subset

Design of Experiments Seite 1 von 1 Design of Experiments Module Overview In this module, you learn how to create design matrices, screen factors, and perform regression analysis and Monte Carlo simulation using Mathcad. Objectives

Simulation Supported POD Methodology and Validation for Automated Eddy Current Procedures 4th International Symposium on NDT in Aerospace 2012 - Th.1.A.1 Simulation Supported POD Methodology and Validation for Automated Eddy Current Procedures Anders ROSELL, Gert PERSSON Volvo Aero Corporation,

Getting to Know Your Data Chapter 2 Getting to Know Your Data 2.1 Exercises 1. Give three additional commonly used statistical measures (i.e., not illustrated in this chapter) for the characterization of data dispersion, and discuss

You ve already read basics of simulation now I will be taking up method of simulation, that is Random Number Generation

You ve already read basics of simulation now I will be taking up method of simulation, that is Random Number Generation Unit 5 SIMULATION THEORY Lesson 39 Learning objective: To learn random number generation. Methods of simulation. Monte Carlo method of simulation You ve already read basics of simulation now I will be

Optimization of Process Parameters of CNC Milling Optimization of Process Parameters of CNC Milling Malay, Kishan Gupta, JaideepGangwar, Hasrat Nawaz Khan, Nitya Prakash Sharma, Adhirath Mandal, Sudhir Kumar, RohitGarg Department of Mechanical Engineering,

A STRUCTURAL OPTIMIZATION METHODOLOGY USING THE INDEPENDENCE AXIOM Proceedings of ICAD Cambridge, MA June -3, ICAD A STRUCTURAL OPTIMIZATION METHODOLOGY USING THE INDEPENDENCE AXIOM Kwang Won Lee leekw3@yahoo.com Research Center Daewoo Motor Company 99 Cheongchon-Dong

Bootstrapping Method for  14 June 2016 R. Russell Rhinehart. Bootstrapping Bootstrapping Method for www.r3eda.com 14 June 2016 R. Russell Rhinehart Bootstrapping This is extracted from the book, Nonlinear Regression Modeling for Engineering Applications: Modeling, Model Validation,

Module - 5 Robust Design Strategies Variation in performance occurs mainly due to control factors and noise factors (uncontrollable). While DOE can identify the influential control factors which can indeed be
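The insensitivity to noise factors that this excerpt describes is usually quantified with signal-to-noise ratios; their standard Taguchi forms (added here for reference, not taken from the excerpt) are:

```latex
% Signal-to-noise ratios over n replicates y_1, ..., y_n
% nominal-the-best (\bar{y}: sample mean, s^2: sample variance):
\eta = 10\log_{10}\frac{\bar{y}^2}{s^2}
% smaller-the-better:
\eta = -10\log_{10}\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} y_i^2\Big)
% larger-the-better:
\eta = -10\log_{10}\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} 1/y_i^2\Big)
```

Maximizing the appropriate η over the control-factor settings minimizes the response variance caused by the noise factors without requiring their elimination.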

D-Optimal Designs. Chapter 888. Introduction. D-Optimal Design Overview

D-Optimal Designs. Chapter 888. Introduction. D-Optimal Design Overview Chapter 888 Introduction This procedure generates D-optimal designs for multi-factor experiments with both quantitative and qualitative factors. The factors can have a mixed number of levels. For example,

CHAPTER 4 MAINTENANCE STRATEGY SELECTION USING TOPSIS AND FUZZY TOPSIS 59 CHAPTER 4 MAINTENANCE STRATEGY SELECTION USING TOPSIS AND FUZZY TOPSIS 4.1 INTRODUCTION The development of FAHP-TOPSIS and fuzzy TOPSIS for selection of maintenance strategy is elaborated in this chapter.

DesignDirector Version 1.0(E) Statistical Design Support System DesignDirector Version 1.0(E) User s Guide NHK Spring Co.,Ltd. Copyright NHK Spring Co.,Ltd. 1999 All Rights Reserved. Copyright DesignDirector is registered trademarks

CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT 2.1 BRIEF OUTLINE The classification of digital imagery is to extract useful thematic information which is one

CHAPTER 2 DESIGN DEFINITION CHAPTER 2 DESIGN DEFINITION Wizard Option The Wizard is a powerful tool available in DOE Wisdom to help with the set-up and analysis of your Screening or Modeling experiment. The Wizard walks you through

Hierarchical Modeling and Analysis of Process Variations: the First Step Towards Robust Deep Sub-Micron Devices, DC Approach 1 Hierarchical Modeling and Analysis of Process Variations: the First Step Towards Robust Deep Sub-Micron Devices, DC Approach Pratt School Of Engineering Duke University Student: Devaka Viraj Yasaratne

[Mahajan*, 4.(7): July, 2015] ISSN: (I2OR), Publication Impact Factor: 3.785 [Mahajan*, 4.(7): July, 05] ISSN: 77-9655 (IOR), Publication Impact Factor:.785 IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY OPTIMIZATION OF SURFACE GRINDING PROCESS PARAMETERS

Probability Models.S4 Simulating Random Variables Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard Probability Models.S4 Simulating Random Variables In the fashion of the last several sections, we will often create probability

EVALUATION OF OPTIMAL MACHINING PARAMETERS OF NICROFER C263 ALLOY USING RESPONSE SURFACE METHODOLOGY WHILE TURNING ON CNC LATHE MACHINE EVALUATION OF OPTIMAL MACHINING PARAMETERS OF NICROFER C263 ALLOY USING RESPONSE SURFACE METHODOLOGY WHILE TURNING ON CNC LATHE MACHINE MOHAMMED WASIF.G 1 & MIR SAFIULLA 2 1,2 Dept of Mechanical Engg.

Application Of Taguchi Method For Optimization Of Knuckle Joint Application Of Taguchi Method For Optimization Of Knuckle Joint Ms.Nilesha U. Patil 1, Prof.P.L.Deotale 2, Prof. S.P.Chaphalkar 3 A.M.Kamble 4,Ms.K.M.Dalvi 5 1,2,3,4,5 Mechanical Engg. Department, PC,Polytechnic,

Basic Concepts And Future Directions Of Road Network Reliability Analysis Journal of Advanced Transportarion, Vol. 33, No. 2, pp. 12.5-134 Basic Concepts And Future Directions Of Road Network Reliability Analysis Yasunori Iida Background The stability of road networks has become

Optimizing Pharmaceutical Production Processes Using Quality by Design Methods Optimizing Pharmaceutical Production Processes Using Quality by Design Methods Bernd Heinen, SAS WHITE PAPER SAS White Paper Table of Contents Abstract.... The situation... Case study and database... Step

CHAPTER 1 INTRODUCTION CHAPTER 1 INTRODUCTION Rapid advances in integrated circuit technology have made it possible to fabricate digital circuits with large number of devices on a single chip. The advantages of integrated circuits

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

CHAPTER 5 MAINTENANCE OPTIMIZATION OF WATER DISTRIBUTION SYSTEM: SIMULATED ANNEALING APPROACH 79 CHAPTER 5 MAINTENANCE OPTIMIZATION OF WATER DISTRIBUTION SYSTEM: SIMULATED ANNEALING APPROACH 5.1 INTRODUCTION Water distribution systems are complex interconnected networks that require extensive planning

DESIGN OF EXPERIMENTS and ROBUST DESIGN DESIGN OF EXPERIMENTS and ROBUST DESIGN Problems in design and production environments often require experiments to find a solution. Design of experiments are a collection of statistical methods that,

Biostatistics and Design of Experiments Prof. Mukesh Doble Department of Biotechnology Indian Institute of Technology, Madras Biostatistics and Design of Experiments Prof. Mukesh Doble Department of Biotechnology Indian Institute of Technology, Madras Lecture - 37 Other Designs/Second Order Designs Welcome to the course on Biostatistics

Modeling Plant Succession with Markov Matrices Modeling Plant Succession with Markov Matrices 1 Modeling Plant Succession with Markov Matrices Concluding Paper Undergraduate Biology and Math Training Program New Jersey Institute of Technology Catherine

17. SEISMIC ANALYSIS MODELING TO SATISFY BUILDING CODES 17. SEISMIC ANALYSIS MODELING TO SATISFY BUILDING CODES The Current Building Codes Use the Terminology: Principal Direction without a Unique Definition 17.1 INTRODUCTION { XE "Building Codes" }Currently

A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models Gleidson Pegoretti da Silva, Masaki Nakagawa Department of Computer and Information Sciences Tokyo University

Modeling with Uncertainty Interval Computations Using Fuzzy Sets Modeling with Uncertainty Interval Computations Using Fuzzy Sets J. Honda, R. Tankelevich Department of Mathematical and Computer Sciences, Colorado School of Mines, Golden, CO, U.S.A. Abstract A new method

Pradeep Kumar J, Giriprasad C R ISSN: 78 7798 Investigation on Application of Fuzzy logic Concept for Evaluation of Electric Discharge Machining Characteristics While Machining Aluminium Silicon Carbide Composite Pradeep Kumar J, Giriprasad

Floating-Point Numbers in Digital Computers POLYTECHNIC UNIVERSITY Department of Computer and Information Science Floating-Point Numbers in Digital Computers K. Ming Leung Abstract: We explain how floating-point numbers are represented and stored

AM205: lecture 2. 1 These have been shifted to MD 323 for the rest of the semester. AM205: lecture 2 Luna and Gary will hold a Python tutorial on Wednesday in 60 Oxford Street, Room 330 Assignment 1 will be posted this week Chris will hold office hours on Thursday (1:30pm 3:30pm, Pierce

CHAPTER 3 MAINTENANCE STRATEGY SELECTION USING AHP AND FAHP 31 CHAPTER 3 MAINTENANCE STRATEGY SELECTION USING AHP AND FAHP 3.1 INTRODUCTION Evaluation of maintenance strategies is a complex task. The typical factors that influence the selection of maintenance strategy

Performance Characterization in Computer Vision Performance Characterization in Computer Vision Robert M. Haralick University of Washington Seattle WA 98195 Abstract Computer vision algorithms axe composed of different sub-algorithms often applied in

1 Introduction. Myung Sik Kim 1, Won Jee Chung 1, Jun Ho Jang 1, Chang Doo Jung 1 1 School of Mechatronics, Changwon National University, South Korea

1 Introduction. Myung Sik Kim 1, Won Jee Chung 1, Jun Ho Jang 1, Chang Doo Jung 1 1 School of Mechatronics, Changwon National University, South Korea Application of SolidWorks & AMESim - based Simulation Technique to Modeling, Cavitation, and Backflow Analyses of Trochoid Hydraulic Pump for Multi-step Transmission Myung Sik Kim 1, Won Jee Chung 1, Jun

Part I, Chapters 4 & 5. Data Tables and Data Analysis Statistics and Figures Part I, Chapters 4 & 5 Data Tables and Data Analysis Statistics and Figures Descriptive Statistics 1 Are data points clumped? (order variable / exp. variable) Concentrated around one value? Concentrated

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.685 Electric Machines Class Notes 11: Design Synthesis and Optimization February 11, 2004 c 2003 James

Samuel Coolidge, Dan Simon, Dennis Shasha, Technical Report NYU/CIMS/TR Detecting Missing and Spurious Edges in Large, Dense Networks Using Parallel Computing Samuel Coolidge, sam.r.coolidge@gmail.com Dan Simon, des480@nyu.edu Dennis Shasha, shasha@cims.nyu.edu Technical Report

Learning Objectives. Continuous Random Variables & The Normal Probability Distribution. Continuous Random Variable Learning Objectives Continuous Random Variables & The Normal Probability Distribution 1. Understand characteristics about continuous random variables and probability distributions 2. Understand the uniform

Supporting Information. High-Throughput, Algorithmic Determination of Nanoparticle Structure From Electron Microscopy Images Supporting Information High-Throughput, Algorithmic Determination of Nanoparticle Structure From Electron Microscopy Images Christine R. Laramy, 1, Keith A. Brown, 2, Matthew N. O Brien, 2 and Chad. A.

Position Error Reduction of Kinematic Mechanisms Using Tolerance Analysis and Cost Function Position Error Reduction of Kinematic Mechanisms Using Tolerance Analysis and Cost Function B.Moetakef-Imani, M.Pour Department of Mechanical Engineering, Faculty of Engineering, Ferdowsi University of

Data Analysis and Solver Plugins for KSpread USER S MANUAL. Tomasz Maliszewski Data Analysis and Solver Plugins for KSpread USER S MANUAL Tomasz Maliszewski tmaliszewski@wp.pl Table of Content CHAPTER 1: INTRODUCTION... 3 1.1. ABOUT DATA ANALYSIS PLUGIN... 3 1.3. ABOUT SOLVER PLUGIN...

1. Introduction. 2. Modelling elements III. CONCEPTS OF MODELLING. - Models in environmental sciences have five components: III. CONCEPTS OF MODELLING 1. INTRODUCTION 2. MODELLING ELEMENTS 3. THE MODELLING PROCEDURE 4. CONCEPTUAL MODELS 5. THE MODELLING PROCEDURE 6. SELECTION OF MODEL COMPLEXITY AND STRUCTURE 1 1. Introduction

EFFECT OF CUTTING SPEED, FEED RATE AND DEPTH OF CUT ON SURFACE ROUGHNESS OF MILD STEEL IN TURNING OPERATION EFFECT OF CUTTING SPEED, FEED RATE AND DEPTH OF CUT ON SURFACE ROUGHNESS OF MILD STEEL IN TURNING OPERATION Mr. M. G. Rathi1, Ms. Sharda R. Nayse2 1 mgrathi_kumar@yahoo.co.in, 2 nsharda@rediffmail.com

Chapter Two: Descriptive Methods 1/50 Chapter Two: Descriptive Methods 1/50 2.1 Introduction 2/50 2.1 Introduction We previously said that descriptive statistics is made up of various techniques used to summarize the information contained

Inclusion of Aleatory and Epistemic Uncertainty in Design Optimization 10 th World Congress on Structural and Multidisciplinary Optimization May 19-24, 2013, Orlando, Florida, USA Inclusion of Aleatory and Epistemic Uncertainty in Design Optimization Sirisha Rangavajhala

OPTIMIZING A VIDEO PREPROCESSOR FOR OCR IBM Systems Development Division, Rochester, Minnesota Summary This paper describes how optimal video preprocessor performance can be achieved using a software

Metaheuristic Development Methodology. Fall 2009 Instructor: Dr. Masoud Yaghini Metaheuristic Development Methodology Fall 2009 Instructor: Dr. Masoud Yaghini Phases and Steps Phases and Steps Phase 1: Understanding Problem Step 1: State the Problem Step 2: Review of Existing Solution

Applying Supervised Learning Applying Supervised Learning When to Consider Supervised Learning A supervised learning algorithm takes a known set of input data (the training set) and known responses to the data (output), and trains

4. Image Retrieval using Transformed Image Content 4. Image Retrieval using Transformed Image Content The desire of better and faster retrieval techniques has always fuelled to the research in content based image retrieval (CBIR). A class of unitary matrices

7 Fractions. Number Sense and Numeration Measurement Geometry and Spatial Sense Patterning and Algebra Data Management and Probability 7 Fractions GRADE 7 FRACTIONS continue to develop proficiency by using fractions in mental strategies and in selecting and justifying use; develop proficiency in adding and subtracting simple fractions;

QstatLab: software for statistical process control and robust engineering QstatLab: software for statistical process control and robust engineering I.N.Vuchkov Iniversity of Chemical Technology and Metallurgy 1756 Sofia, Bulgaria qstat@dir.bg Abstract A software for quality

Introduction to Control Systems Design Experiment One Introduction to Control Systems Design Control Systems Laboratory Dr. Zaer Abo Hammour Dr. Zaer Abo Hammour Control Systems Laboratory 1.1 Control System Design The design of control systems

Driven Cavity Example BMAppendixI.qxd 11/14/12 6:55 PM Page I-1 I CFD Driven Cavity Example I.1 Problem One of the classic benchmarks in CFD is the driven cavity problem. Consider steady, incompressible, viscous flow in a square

Lab 5 - Risk Analysis, Robustness, and Power Type equation here.biology 458 Biometry Lab 5 - Risk Analysis, Robustness, and Power I. Risk Analysis The process of statistical hypothesis testing involves estimating the probability of making errors

Improvement of a tool for the easy determination of control factor interaction in the Design of Experiments and the Taguchi Methods Improvement of a tool for the easy determination of control factor in the Design of Experiments and the Taguchi Methods I. TANABE, and T. KUMAI Abstract In recent years, the Design of Experiments (hereafter,

Psychology 282 Lecture #21 Outline Categorical IVs in MLR: Effects Coding and Contrast Coding Psychology 282 Lecture #21 Outline Categorical IVs in MLR: Effects Coding and Contrast Coding In the previous lecture we learned how to incorporate a categorical research factor into a MLR model by using

Chapter 2 On-Chip Protection Solution for Radio Frequency Integrated Circuits in Standard CMOS Process Chapter 2 On-Chip Protection Solution for Radio Frequency Integrated Circuits in Standard CMOS Process 2.1 Introduction Standard CMOS technologies have been increasingly used in RF IC applications mainly

UNIT 2 WHAT IS STATISTICS? Researchers deal with a large amount of data and have to draw dependable conclusions on the basis of data collected for the purpose. Statistics help the researchers in making

PTE 519 Lecture Note Finite Difference Approximation (Model) PTE 519 Lecture Note 3 3.0 Finite Difference Approximation (Model) In this section of the lecture material, the focus is to define the terminology and to summarize the basic facts. The basic idea of any

STATISTICS (STAT) Statistics (STAT) 1 Statistics (STAT) 1 STATISTICS (STAT) STAT 2013 Elementary Statistics (A) Prerequisites: MATH 1483 or MATH 1513, each with a grade of "C" or better; or an acceptable placement score (see placement.okstate.edu).

INTRODUCTION OF STATISTICAL DECISION MAKING AND MEASUREMENT CONTROL CHARTS INTO RIT CLEANROOM FACILITY INTRODUCTION OF STATISTICAL DECISION MAKING AND MEASUREMENT CONTROL CHARTS INTO RIT CLEANROOM FACILITY By Matthew L. Blair 5th Year Microelectronic Engineering Student Rochester Institute of Technology

The Ohio State University Columbus, Ohio, USA Universidad Autónoma de Nuevo León San Nicolás de los Garza, Nuevo León, México, 66450

The Ohio State University Columbus, Ohio, USA Universidad Autónoma de Nuevo León San Nicolás de los Garza, Nuevo León, México, 66450 Optimization and Analysis of Variability in High Precision Injection Molding Carlos E. Castro 1, Blaine Lilly 1, José M. Castro 1, and Mauricio Cabrera Ríos 2 1 Department of Industrial, Welding & Systems

APPENDIX K MONTE CARLO SIMULATION APPENDIX K MONTE CARLO SIMULATION K-1. Introduction. Monte Carlo simulation is a method of reliability analysis that should be used only when the system to be analyzed becomes too complex for use of simpler

Cpk: What is its Capability? By: Rick Haynes, Master Black Belt Smarter Solutions, Inc.

Cpk: What is its Capability? By: Rick Haynes, Master Black Belt Smarter Solutions, Inc. C: What is its Capability? By: Rick Haynes, Master Black Belt Smarter Solutions, Inc. C is one of many capability metrics that are available. When capability metrics are used, organizations typically provide

Fundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology, Madras. Lecture No. Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture No. # 13 Transportation Problem, Methods for Initial Basic Feasible

Shape fitting and non convex data analysis Shape fitting and non convex data analysis Petra Surynková, Zbyněk Šír Faculty of Mathematics and Physics, Charles University in Prague Sokolovská 83, 186 7 Praha 8, Czech Republic email: petra.surynkova@mff.cuni.cz,

More information

Optimisation of Quality and Prediction of Machining Parameter for Surface Roughness in CNC Turning on EN8

Optimisation of Quality and Prediction of Machining Parameter for Surface Roughness in CNC Turning on EN8 Indian Journal of Science and Technology, Vol 9(48), DOI: 10.17485/ijst/2016/v9i48/108431, December 2016 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 Optimisation of Quality and Prediction of Machining

More information

DIGITAL IMAGE PROCESSING WRITTEN REPORT ADAPTIVE IMAGE COMPRESSION TECHNIQUES FOR WIRELESS MULTIMEDIA APPLICATIONS

DIGITAL IMAGE PROCESSING WRITTEN REPORT ADAPTIVE IMAGE COMPRESSION TECHNIQUES FOR WIRELESS MULTIMEDIA APPLICATIONS DIGITAL IMAGE PROCESSING WRITTEN REPORT ADAPTIVE IMAGE COMPRESSION TECHNIQUES FOR WIRELESS MULTIMEDIA APPLICATIONS SUBMITTED BY: NAVEEN MATHEW FRANCIS #105249595 INTRODUCTION The advent of new technologies

More information

EXPERIMENTAL INVESTIGATION OF A CENTRIFUGAL BLOWER BY USING CFD

EXPERIMENTAL INVESTIGATION OF A CENTRIFUGAL BLOWER BY USING CFD Int. J. Mech. Eng. & Rob. Res. 2014 Karthik V and Rajeshkannah T, 2014 Research Paper ISSN 2278 0149 www.ijmerr.com Vol. 3, No. 3, July 2014 2014 IJMERR. All Rights Reserved EXPERIMENTAL INVESTIGATION

More information

EE434 ASIC & Digital Systems Testing

EE434 ASIC & Digital Systems Testing EE434 ASIC & Digital Systems Testing Spring 2015 Dae Hyun Kim daehyun@eecs.wsu.edu 1 Introduction VLSI realization process Verification and test Ideal and real tests Costs of testing Roles of testing A

More information

Spatial Patterns Point Pattern Analysis Geographic Patterns in Areal Data

Spatial Patterns Point Pattern Analysis Geographic Patterns in Areal Data Spatial Patterns We will examine methods that are used to analyze patterns in two sorts of spatial data: Point Pattern Analysis - These methods concern themselves with the location information associated

More information

Robust Design. Moonseo Park Architectural Engineering System Design. June 3rd, Associate Professor, PhD

Robust Design. Moonseo Park Architectural Engineering System Design. June 3rd, Associate Professor, PhD Robust Design 4013.315 Architectural Engineering System Design June 3rd, 2009 Moonseo Park Associate Professor, PhD 39 동 433 Phone 880-5848, Fax 871-5518 E-mail: mspark@snu.ac.kr Department of Architecture

More information

Chapter-1: Solutions to Supplementary Exercises

Chapter-1: Solutions to Supplementary Exercises Chapter-: Solutions to Supplementary Exercises.S.. ablet Weights (a) A weight of 85 mg corresponds to a Z value of.4 as shown below the standard Normal table gives an area of 0.06 to the left of.4, so

More information