UPPER CONFIDENCE LIMIT FOR NON-NORMAL INCAPABILITY INDEX: A CASE STUDY

AL-AZHAR ENGINEERING TENTH INTERNATIONAL CONFERENCE, December 4-6, 2008
Code: M 01

A. Rotondo 1 and Amir A. Mahdy 2

1 Dipartimento di Ingegneria Meccanica e Gestionale, Politecnico di Bari, viale Japigia 182, 70126 Bari, Italy
2 Mining and Petroleum Eng. Dept., Faculty of Engineering, Al-Azhar University (Post Doc. Fellowship, School of Mechanical and Manufacturing Eng., Dublin City University, Dublin 9, Ireland)

ABSTRACT: Process capability indices (PCIs) are widely adopted in industrial environments since they are a simple tool to assess the level of conformance of products to customers' specifications and, hence, to monitor process performance. Among PCIs, the process incapability index, Cpp, has progressively gained considerable attention since it is easy to apply and analytically convenient. Moreover, it keeps uncontaminated information about process accuracy and process precision. As for other PCIs, the applicability of Cpp can also be extended to non-normal processes by means of Clements's method [1]; however, with this reformulation, the distribution of the non-normal Cpp becomes mathematically intractable. Another concern with non-normal PCIs is the reliability and accuracy of the percentiles of the Pearson distribution system. In this study, a non-normal Cpp, Cpp(q), based on the Burr XII distribution is proposed, and its upper confidence limit is evaluated by means of the bootstrap resampling method. The upper confidence bound of Cpp(q) can be considered a reliable indicator of product quality and process performance, since it allows problems caused by sampling errors to be avoided. The procedure proposed in this paper can be easily implemented, and only a basic statistical background is required to interpret its results. Finally, a numerical example is presented.

© 2008 Faculty of Engineering, Al-Azhar University, Cairo, Egypt. All rights reserved.

KEYWORDS: Process Incapability Index; Burr XII Distribution; Bootstrap Methodology.

1. INTRODUCTION

Process capability analysis is used to evaluate the level of conformance of products to their specifications. It enables both the monitoring and the reduction of the variability of industrial processes [2]. The simplest tools to assess process capability are process capability indices (PCIs). They provide a single, unitless measure of process potential and performance which reflects the ability of the process to meet specification limits [3]. Their simplicity makes them usable as a communication medium between quality engineers and shop-floor controllers interested in improving process performance. They can also be used during contract stipulation [4].

In the last two decades, several PCIs have been developed, involving more and more criteria in capability measurement: process variation, process departure, process yield and process loss [5]. The first indices were defined by Juran and Kane as follows:

C_p = \frac{USL - LSL}{6\sigma}    (1)

C_{pk} = \min\{C_{pu}, C_{pl}\} = \min\left\{ \frac{USL - \mu}{3\sigma},\; \frac{\mu - LSL}{3\sigma} \right\}    (2)

where µ and σ are the process mean and the process standard deviation, and USL and LSL are the upper and the lower specification limits, respectively. When the process parameters are unknown, the sample mean, x̄, and the sample standard deviation, S, can replace them. With respect to Cp, Cpk has the advantage of considering, along with process variability, the degree of process mean shift from the centre of the specification interval. Moreover, the two sub-indices appearing in the definition of Cpk, Cpu and Cpl, can be used in the case of unilateral specifications. However, this first category of indices does not take the target value, T, into any account; that is, the process loss caused by mean departure from the target is completely neglected [6]. A new class of indices has been introduced to remedy this disadvantage:

C_{pm} = \frac{USL - LSL}{6\sqrt{\sigma^2 + (\mu - T)^2}}    (3)

C_{pmk} = \min\left\{ \frac{USL - \mu}{3\sqrt{\sigma^2 + (\mu - T)^2}},\; \frac{\mu - LSL}{3\sqrt{\sigma^2 + (\mu - T)^2}} \right\}    (4)

The statistical properties of the natural estimator of Cpm are mathematically intractable. Accordingly, Greenwich and Jahr-Schaffrath [7] proposed the incapability index Cpp, which is a simple transformation of Cpm but is easier to use and analytically convenient [8]. An overview of the Cpp properties is presented in the next section.

The definition of such PCIs has generally been based on three assumptions: (1) the system determining which data are collected is under control; (2) the collected data are independent and identically distributed; (3) the process data are normally distributed, that is, the process must be normal [2]. However, these conditions, and above all the last one, are not always satisfied in an industrial environment, and an uncritical use of these indices may lead to erroneous conclusions. Many studies have been conducted on handling non-normal processes, and two simple approaches have emerged. The first consists of transforming the original data into normally distributed data; models suitable for any type of data are available, see for instance [9-11]. The second suggests the use of non-normal percentiles in place of the sample mean and standard deviation to calculate the PCIs. Clements used the latter approach when proposing the method of non-normal percentiles to calculate Cp and Cpk for a distribution of any shape, using the Pearson family of curves [2].
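As a concrete illustration of Eqs. (1)-(4), before moving on to their non-normal modifications, the following Python sketch (not part of the original paper) computes the four normal-theory indices with µ and σ replaced by their sample estimates; the sample data and specification values in the example call are hypothetical.

```python
import numpy as np

def normal_pcis(x, usl, lsl, target):
    """Sample-based estimates of Cp, Cpk, Cpm and Cpmk (Eqs. 1-4)."""
    xbar, s = np.mean(x), np.std(x, ddof=1)        # x-bar and S replace mu and sigma
    cp = (usl - lsl) / (6.0 * s)
    cpk = min((usl - xbar) / (3.0 * s), (xbar - lsl) / (3.0 * s))
    tau = np.sqrt(s ** 2 + (xbar - target) ** 2)   # sqrt(S^2 + (x-bar - T)^2)
    cpm = (usl - lsl) / (6.0 * tau)
    cpmk = min((usl - xbar) / (3.0 * tau), (xbar - lsl) / (3.0 * tau))
    return cp, cpk, cpm, cpmk

# Hypothetical example: a roughly normal sample with USL = 530, LSL = 510, T = 520
rng = np.random.default_rng(1)
sample = rng.normal(loc=521.0, scale=2.5, size=100)
print(normal_pcis(sample, usl=530.0, lsl=510.0, target=520.0))
```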

Based on Clements's method, Pearn and Kotz [12] further modified the PCIs as follows:

C_{p(q)} = \frac{USL - LSL}{x_{0.99865} - x_{0.00135}}    (5)

C_{pk(q)} = \min\left\{ \frac{USL - x_{0.5}}{x_{0.99865} - x_{0.5}},\; \frac{x_{0.5} - LSL}{x_{0.5} - x_{0.00135}} \right\}    (6)

C_{pm(q)} = \frac{\min(USL - T,\; T - LSL)}{3\sqrt{\left(\frac{x_{0.99865} - x_{0.00135}}{6}\right)^2 + (x_{0.5} - T)^2}}    (7)

C_{pmk(q)} = \min\left\{ \frac{USL - x_{0.5}}{3\sqrt{\left(\frac{x_{0.99865} - x_{0.5}}{3}\right)^2 + (x_{0.5} - T)^2}},\; \frac{x_{0.5} - LSL}{3\sqrt{\left(\frac{x_{0.5} - x_{0.00135}}{3}\right)^2 + (x_{0.5} - T)^2}} \right\}    (8)

where x_p is the p·100-th percentile value of the non-normal data; in particular, in Clements's method, it represents the p·100-th percentile of the Pearson distribution family and can easily be obtained by evaluating the sample skewness and kurtosis coefficients. Liu and Chen [2] demonstrated that the Burr type XII distribution, if used in place of the Pearson curves for evaluating the non-normal percentiles, generally provides slightly better results and works especially well for distributions which depart moderately from normality.

In spite of the noticeable efforts that have addressed non-normal process capability analysis, the mathematical intractability of the non-normal PCIs allows neither the investigation of their statistical properties nor analytical statistical inference on them. However, due to the sampling errors that frequently occur during data collection, a comparison between different entities under investigation, for instance suppliers or processes, based on a point estimate of the chosen PCI cannot be considered reliable. This is why confidence interval evaluation is preferable when making decisions. To the authors' knowledge, a confidence interval for Cpp(q) based on the Pearson distribution family has never been developed. Moreover, the calculation of Cpp(q) using the Burr distribution system has never been proposed in the open literature. In this study, the upper confidence bound of the non-normal incapability index, Cpp(q), built by means of the bootstrap resampling simulation method, is proposed as an effective indicator usable for selecting suppliers, monitoring processes, or guiding quality engineers in improving process performance. The Burr-based approach has been used. A case study shows the ability of the proposed index to discriminate between processes of different quality levels.
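To show how Eqs. (5)-(8) combine the 0.135th, 50th and 99.865th percentiles, the hypothetical sketch below substitutes simple empirical sample percentiles for the fitted Pearson (or Burr) percentiles used in the paper; it is an illustration of the formulas only, not Clements's table-based procedure.

```python
import numpy as np

def percentile_pcis(x, usl, lsl, target):
    """Percentile versions of Cp, Cpk, Cpm and Cpmk (Eqs. 5-8).
    Empirical percentiles stand in for the fitted Pearson/Burr percentiles."""
    lo, med, hi = np.percentile(x, [0.135, 50.0, 99.865])
    cp_q = (usl - lsl) / (hi - lo)
    cpk_q = min((usl - med) / (hi - med), (med - lsl) / (med - lo))
    cpm_q = min(usl - target, target - lsl) / (
        3.0 * np.sqrt(((hi - lo) / 6.0) ** 2 + (med - target) ** 2))
    cpmk_q = min(
        (usl - med) / (3.0 * np.sqrt(((hi - med) / 3.0) ** 2 + (med - target) ** 2)),
        (med - lsl) / (3.0 * np.sqrt(((med - lo) / 3.0) ** 2 + (med - target) ** 2)))
    return cp_q, cpk_q, cpm_q, cpmk_q
```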

2. INCAPABILITY INDEX Cpp

The process incapability index, Cpp, was introduced by Greenwich and Jahr-Schaffrath [7] as a simple transformation of Cpm:

C_{pp} = \left(\frac{1}{C_{pm}}\right)^2 = \left(\frac{\mu - T}{D}\right)^2 + \left(\frac{\sigma}{D}\right)^2    (9)

where D = (USL - LSL)/6. This index is easy to apply and enjoys considerable attention in studies on capability analysis, since it provides more information than other commonly used PCIs. First, it inherits the properties of Cpm, chiefly the consideration of process location and process centring (that is, the ability to distinguish between on-target and off-target processes). Second, it can be expressed as the sum of two sub-indices: ((µ - T)/D)^2, also called the inaccuracy index, Cia, and (σ/D)^2, denoted by Cip, the imprecision index. Cia accounts for the process loss caused by mean departure from the target, while Cip reflects the extent of process variation [3]. This uncontaminated separation of the information concerning accuracy and precision is a useful tool both for identifying the causes of an inadequate process capability (too high Cpp values) and for distinguishing between entities that happen to be characterized by the same Cpp, or Cpm, value. Some commonly used values of Cpp, with the related process denomination, are reported in Table 1. Table 2 lists some commonly used values of Cip and the corresponding quality condition.

The Cpp distribution was studied by Chen [13] under the normality assumption; hence, confidence interval evaluation is possible. Even when the process parameters are unknown, an interval estimate of Cpp can be obtained, since Greenwich and Jahr-Schaffrath [7] found that Ĉpp/Cpp is approximately distributed as χ²_ν/ν, Ĉpp being the natural estimator of Cpp, χ²_ν a chi-square distribution with ν degrees of freedom, and

\nu = \frac{n\{1 + [(\mu - T)/\sigma]^2\}^2}{1 + 2[(\mu - T)/\sigma]^2}

Due to its features, the Cpp index has been widely used in process capability analysis. Chen et al. used Cpp for contract manufacturer selection, both by developing a score index based on Cpp confidence bounds [14] and by constructing a supplier capability and price analysis chart in which considerations about product price are also included [3]. Pearn et al. proposed a multiprocess performance analysis chart based on Cpp, which displays multiple processes, with departure and process variability relative to the specification tolerances, on one single chart [15].
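The decomposition of Eq. (9) into the inaccuracy and imprecision sub-indices, and the approximate chi-square result quoted above, can be sketched as follows (hypothetical Python, not the authors' code). The (1 − α) upper bound in the last line is not stated explicitly in the paper; it is derived here from Ĉpp/Cpp ≈ χ²_ν/ν, with λ = ((µ − T)/σ)² estimated from the sample.

```python
import numpy as np
from scipy.stats import chi2

def cpp_normal(x, usl, lsl, target, alpha=0.05):
    """Cpp = Cia + Cip (Eq. 9) plus an approximate (1 - alpha) upper bound
    based on Cpp_hat / Cpp ~ chi2(nu)/nu (Greenwich and Jahr-Schaffrath [7])."""
    n = len(x)
    xbar, s = np.mean(x), np.std(x, ddof=1)
    d = (usl - lsl) / 6.0
    cia = ((xbar - target) / d) ** 2                 # inaccuracy sub-index
    cip = (s / d) ** 2                               # imprecision sub-index
    cpp_hat = cia + cip
    lam = ((xbar - target) / s) ** 2                 # lambda estimated from the sample
    nu = n * (1.0 + lam) ** 2 / (1.0 + 2.0 * lam)    # effective degrees of freedom
    upper = nu * cpp_hat / chi2.ppf(alpha, nu)       # P(Cpp <= upper) ~ 1 - alpha
    return cia, cip, cpp_hat, upper
```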

Table 1. Some commonly used Cpp values and the corresponding Cpm values.

Table 2. Some commonly used Cip values and the corresponding quality condition.

Quality condition    Precision requirement
Capable              0.56 ≤ Cip ≤ 1.00
Satisfactory         0.44 ≤ Cip ≤ 0.56
Good                 0.36 ≤ Cip ≤ 0.44
Excellent            0.25 ≤ Cip ≤ 0.36
Super                Cip ≤ 0.25

When the process characteristic is not normally distributed, the relation between Cpp and Cpm can be used to define a non-normal Cpp, Cpp(q), as follows:

C_{pp(q)} = \left(\frac{1}{C_{pm(q)}}\right)^2 = \left(\frac{x_{0.99865} - x_{0.00135}}{6D^*}\right)^2 + \left(\frac{x_{0.5} - T}{D^*}\right)^2    (10)

where D* = min((USL - T), (T - LSL))/3. Note that, unlike Cpp, and thanks to the Cpm(q) formulation, Cpp(q) can also be adopted in the case of asymmetric tolerances. However, the distribution of Cpp(q) is mathematically intractable. The percentiles in Eq. (10) are usually evaluated using the Pearson family of distribution curves. Following Liu and Chen's suggestion [2], in this study the non-normal percentiles of Cpp(q) are evaluated using the Burr system of distributions, which has proved to be more accurate in PCI estimation than Clements's method.

3. THE BURR XII DISTRIBUTION

The Burr type XII distribution was introduced by Burr in 1942 [16]. Its cumulative distribution function and the corresponding probability density function are, respectively:

F(x) = 1 - (1 + x^c)^{-k}  for x ≥ 0;  F(x) = 0  for x < 0    (11)

f(x) = kc\,x^{c-1}(1 + x^c)^{-(k+1)}  for x ≥ 0;  f(x) = 0  for x < 0    (12)
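Because the cumulative distribution function in Eq. (11) exists in closed form, it can be inverted algebraically as x_p = ((1 - p)^{-1/k} - 1)^{1/c}, which is what makes the Burr XII system convenient for percentile-based indices. The hypothetical Python sketch below illustrates this; the parameter values in the example call are illustrative only and are not taken from the paper or from Burr's tables.

```python
def burr_cdf(x, c, k):
    """Burr XII cumulative distribution function, Eq. (11), for x >= 0."""
    return 1.0 - (1.0 + x ** c) ** (-k)

def burr_pdf(x, c, k):
    """Burr XII probability density function, Eq. (12), for x >= 0."""
    return k * c * x ** (c - 1.0) * (1.0 + x ** c) ** (-(k + 1.0))

def burr_ppf(p, c, k):
    """Closed-form inverse of Eq. (11): the p-quantile of a Burr XII variate."""
    return ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

# Illustrative parameter values only: the three percentiles used by Eq. (10)
print([round(burr_ppf(p, c=4.9, k=6.2), 4) for p in (0.00135, 0.5, 0.99865)])
```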

The region of coverage of the standardized moments a3 and a4 was further extended, toward low a4 for a given a3, by means of the reciprocal transformation x = 1/y, which yields the following cumulative distribution function:

G(y) = (1 + y^{-c})^{-k}  for y > 0;  G(y) = 0  for y ≤ 0    (13)

With this extension, Burr showed that an appropriate choice of the distribution parameters, c and k, allows coverage of a large proportion of the curve-shape characteristics of types I, II, III, IV and VI in the Pearson family [17]; this means that the well-known normal, Weibull, logistic, lognormal, gamma, beta and extreme value type I distributions are all covered by the Burr XII system. Burr [18] provided tables in which several combinations of c and k are related to the expected value, the standard deviation, the standardized skewness coefficient and the standardized kurtosis coefficient of the Burr XII distribution. These last two parameters represent the link between a random variate (X) and a Burr variate (U). When a sample is drawn from X, the sample skewness coefficient (a3) and the sample kurtosis coefficient (a4) allow the fitted parameters of the Burr XII distribution to be obtained from the tables. A standardized transformation then relates X to U:

\frac{X - \bar{x}}{S_x} = \frac{U - \mu}{\sigma}

where x̄ and S_x are the sample mean and standard deviation and µ and σ are the mean and the standard deviation of the corresponding Burr variate. Besides the extended coverage area, the Burr XII distribution has the property that its cumulative distribution function exists in closed form [19]; hence, an algebraic transformation allows the distribution percentiles, needed in non-normal PCI evaluation, to be obtained. Its extreme flexibility makes it suitable for describing data from the real world. Many authors have used it as a lifetime model in reliability analysis. Wang [20] presented a Burr XII based model for the failure rate of mechanical and electronic components. It was also used by Abdel-Ghaly et al. for developing a software reliability growth model [21]. Good fits are also obtained when biological, clinical and other experimental data have to be described. Ali Mousa and Jaheen provided maximum likelihood and Bayes estimates of the two parameters of the Burr XII distribution for progressively type II censored data [22]. Wang, Keats and Zimmer reported the maximum likelihood method for fitting the Burr XII parameters to both censored and uncensored data [19].
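To make the standardized transformation concrete: once c and k have been fitted (in the paper, through Burr's tables from the sample a3 and a4), a Burr percentile u_p is mapped back to the data scale as x_p = x̄ + S_x(u_p − µ)/σ. The hypothetical sketch below does this; the mean and standard deviation of the Burr variate are computed from the standard beta-function expression for the Burr XII moments, a step the paper delegates to Burr's tables, and the parameter and sample values in the example are assumptions.

```python
import numpy as np
from scipy.special import beta

def burr_mean_std(c, k):
    """Mean and standard deviation of a Burr XII variate (requires c*k > 2).
    r-th moment: E[U^r] = k * B(1 + r/c, k - r/c)."""
    m1 = k * beta(1.0 + 1.0 / c, k - 1.0 / c)
    m2 = k * beta(1.0 + 2.0 / c, k - 2.0 / c)
    return m1, np.sqrt(m2 - m1 ** 2)

def data_percentile(p, xbar, s_x, c, k):
    """Map the Burr percentile u_p to the data scale through
    (X - xbar)/S_x = (U - mu)/sigma, i.e. x_p = xbar + S_x*(u_p - mu)/sigma."""
    u_p = ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)   # closed-form Burr percentile
    mu, sigma = burr_mean_std(c, k)
    return xbar + s_x * (u_p - mu) / sigma

# Hypothetical use: the three percentiles needed by Eq. (10), on the data scale
print([data_percentile(p, xbar=520.0, s_x=2.5, c=4.9, k=6.2)
       for p in (0.00135, 0.5, 0.99865)])
```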

4. THE BOOTSTRAP METHOD

The normality assumption, often used in statistical research and generally justified by the central limit theorem, presents several advantages, among which is the possibility of evaluating statistical errors and confidence intervals. However, it provides accurate results only if the statistics, or transformations of them, are asymptotically normally distributed. In an industrial environment such a property is not always verified, and a normality approximation may yield misleading conclusions. To avoid these problems, thanks to advances in computation, more realistic models can be adopted by using methods which would otherwise be too complicated for practical analytical treatment. Among these, the bootstrap methodology has obtained great success and diffusion. It is used to solve problems generally characterized by mathematical intractability, and it works well whether the population under examination is normally distributed or not. It was initiated by Efron [23] in 1979 and relies on the idea that the sampling distribution of a statistic, θ, can be estimated by the relative frequency distribution of the θ values calculated for each of the samples of size n randomly drawn, with replacement, from a random sample of the population. The only requirements are that the random sample must be representative of the population and that the observations are independent and identically distributed.

The original sample of size n, {x_1, x_2, ..., x_n}, is treated as the population X; then, a Monte Carlo style procedure draws from it, with replacement, a large number, B, of resamples, also called bootstrap samples, {x*_1, x*_2, ..., x*_B}. Theoretically, the replacement allows n^n different samples to be created from the original one, but 1000 resamples are generally considered enough to obtain an accurate confidence interval estimate [24]. If θ is the parameter by which process performance is controlled, the bootstrap estimate of θ, θ̂*, can be evaluated for each bootstrap sample. Then, the relative frequency histogram of the B bootstrap estimates, {θ̂*_1, θ̂*_2, ..., θ̂*_B}, can be built; the distribution obtained is the bootstrap estimate of the sampling distribution of θ and can be used to make statistical inference on it.

Different types of bootstrap confidence interval have been developed; the most commonly used are the standard bootstrap (SB) confidence interval, the percentile bootstrap (PB) confidence interval and the bias-corrected percentile bootstrap (BCPB) confidence interval [6]. The construction of the two-sided (1 - 2α)100% BCPB confidence limits is described here, so that a one-sided (1 - α)100% confidence bound can be obtained by using only one of the two limits [6]. The bootstrap distribution, being built on only a sample of the complete bootstrap distribution, may be biased, that is, shifted higher or lower than expected. In order to correct the possible bias, the BCPB procedure consists of the following steps. First, the probability p_0 = P[θ̂* ≤ θ̂] is calculated using the ordered distribution of {θ̂*_i}, i = 1, ..., B, where θ̂ is the value of θ estimated from the original sample. Secondly, the inverse of the standard normal cumulative distribution function evaluated at p_0 is computed, z_0 = Φ^{-1}(p_0), and then p_L = Φ(2z_0 - z_α) and p_U = Φ(2z_0 + z_α), where Φ(·) is the standard normal cumulative distribution function and z_α = Φ^{-1}(1 - α). Finally, the BCPB confidence interval is obtained as [θ̂*(p_L·B), θ̂*(p_U·B)].
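A compact sketch of the BCPB steps just described (illustrative Python, not the authors' implementation); the order-statistic indexing used to pick θ̂*(p_L·B) and θ̂*(p_U·B), and the clipping of p_0 away from 0 and 1, are implementation details assumed here.

```python
import numpy as np
from scipy.stats import norm

def bcpb_limits(boot_estimates, theta_hat, alpha=0.05):
    """Bias-corrected percentile bootstrap limits:
    p0 = P[theta* <= theta_hat],  z0 = Phi^-1(p0),
    pL = Phi(2*z0 - z_alpha),  pU = Phi(2*z0 + z_alpha)."""
    b = np.sort(np.asarray(boot_estimates, dtype=float))
    B = len(b)
    p0 = np.mean(b <= theta_hat)
    p0 = min(max(p0, 1.0 / B), 1.0 - 1.0 / B)          # keep Phi^-1(p0) finite
    z0 = norm.ppf(p0)
    z_a = norm.ppf(1.0 - alpha)
    p_lo = norm.cdf(2.0 * z0 - z_a)
    p_up = norm.cdf(2.0 * z0 + z_a)
    lower = b[max(int(np.floor(p_lo * B)) - 1, 0)]      # theta*(pL * B)
    upper = b[min(int(np.ceil(p_up * B)) - 1, B - 1)]   # theta*(pU * B)
    return lower, upper
```

For a smaller-the-better index such as Cpp(q), only the upper limit is used, giving a one-sided (1 - α)100% bound.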
In the literature, confidence intervals of some PCIs have been computed by means of the bootstrap method. Chen and Tong constructed the BCPB confidence interval of the difference between the Cpk values of two suppliers and proposed it as a supplier selection criterion; they proved the robustness of the BCPB confidence interval under varying distribution parameters and sample sizes [24]. Chen and Pearn derived the confidence interval for the Spk index [25]. Pearn et al. evaluated the upper confidence limit for the Yq index and found that the bootstrap method provides more reliable statistical inferences than those obtained by means of a normality approximation of the Q-yield distribution when large samples are not available [6].

5. BCPB UPPER CONFIDENCE BOUND FOR Cpp(q)

Unlike most common PCIs, Cpp(q) is a smaller-the-better type index. This implies that, when it is chosen as a critical indicator, a reliable analysis of process or supplier performance should be based on its upper confidence limit, that is, on the worst conditions under which the process or the supplier could perform.

If the processes under investigation are independently distributed, without any other requirement on either the specifications or the distributions, the BCPB upper confidence limit for Cpp(q) can be computed, for each process, as follows:

Step 1. Consider a single process.
Step 2. Randomly select a sample of size n from the process population. This is called the original sample.
Step 3. Use the bootstrap resampling method to draw, with replacement, a bootstrap sample of size n from the original one.
Step 4. From the i-th (i = 1, ..., B) bootstrap sample calculate Cpp(q). This estimate is denoted by Ĉ*_pp(q)i.
Step 5. Repeat Steps 3 and 4 B times (B = 1000).
Step 6. Sort the B values of Ĉ*_pp(q)i in ascending order. They are referred to as Ĉ*_pp(q)(1), Ĉ*_pp(q)(2), ..., Ĉ*_pp(q)(B).
Step 7. Use the BCPB confidence interval procedure to obtain the upper confidence bound of Cpp(q).
Step 8. Repeat Steps 1 to 7 for each process.

In terms of capability, the best process is the one characterized by the lowest upper confidence limit. The entire procedure is implemented in a MATLAB 7.0 code (a compact illustrative sketch of these steps is given after Table 3).

6. AN APPLICATION

In order to demonstrate how to use the upper confidence bound of Cpp(q) in making decisions, the proposed procedure is applied to data available in [24]. These data refer to two suppliers who provided aluminium foil materials to an electronics company. The capability analysis is conducted on the withstanding voltage of the aluminium foil, which is considered one of the most important characteristics of the foils when they are used as capacitor components. The production specifications, expressed in [WV], have been set to USL = 530 and LSL = 510, with target value T. If the voltage exceeds the specification limits, the aluminium foil must be rejected. Statistical information about the random samples drawn from the aluminium foil production processes is listed in Table 3. The 95% unilateral confidence interval of Cpp(q) has been derived for both suppliers. For the first supplier the upper confidence bound (UCB) of Cpp(q) was equal to 0.2701, so that the process can be considered nearly super; for the second supplier the simulation produced a Cpp(q) upper confidence bound such that the process cannot be considered capable (Table 3). Evidently, with 95% confidence, the first supplier performs better than the second one. This result confirms the conclusions drawn in [24].

Table 3. Statistical information about the two samples and the Ĉpp(q) and Cpp(q) UCB values (reported columns: x̄ [WV], S [WV], skewness a3, kurtosis a4, Ĉpp(q), Cpp(q) UCB, for Supplier 1 and Supplier 2).
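A minimal end-to-end sketch of Steps 1-7 above (hypothetical Python rather than the authors' MATLAB code): Cpp(q) of Eq. (10) is evaluated here with empirical percentiles instead of the Burr-fitted ones, and the data, specification limits and target value in the example are illustrative only, not the suppliers' data.

```python
import numpy as np
from scipy.stats import norm

def cpp_q(x, usl, lsl, target):
    """Eq. (10); empirical percentiles stand in for the Burr-based ones."""
    lo, med, hi = np.percentile(x, [0.135, 50.0, 99.865])
    d_star = min(usl - target, target - lsl) / 3.0
    return ((hi - lo) / (6.0 * d_star)) ** 2 + ((med - target) / d_star) ** 2

def cpp_q_upper_bound(sample, usl, lsl, target, B=1000, alpha=0.05, seed=0):
    """Steps 2-7: bootstrap resampling of Cpp(q) and its BCPB upper bound."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    theta_hat = cpp_q(sample, usl, lsl, target)
    boot = np.sort([cpp_q(rng.choice(sample, size=n, replace=True),
                          usl, lsl, target) for _ in range(B)])
    p0 = min(max(np.mean(boot <= theta_hat), 1.0 / B), 1.0 - 1.0 / B)
    p_up = norm.cdf(2.0 * norm.ppf(p0) + norm.ppf(1.0 - alpha))
    return boot[min(int(np.ceil(p_up * B)) - 1, B - 1)]

# Illustrative data only; the suppliers' samples are not reproduced in the paper.
rng = np.random.default_rng(1)
foil_voltage = 516.0 + rng.gamma(shape=6.0, scale=0.8, size=80)
print(cpp_q_upper_bound(foil_voltage, usl=530.0, lsl=510.0, target=520.0))
```

Repeating the call for each supplier (Step 8) and comparing the resulting upper bounds reproduces the kind of ranking reported in Table 3: the supplier with the lowest Cpp(q) UCB is preferred.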

7. CONCLUSION

The process incapability index, Cpp, differs from the other commonly used PCIs since it is easy to apply, analytically convenient and, above all, it keeps uncontaminated information about process accuracy and process precision. However, like many other PCIs, in its original formulation it can only be applied to normal processes. This limitation can easily be overcome if the non-normal Cpm formulation is used to derive a non-normal Cpp index, Cpp(q). Based on Clements's method, the percentile values required for non-normal PCIs have usually been estimated by means of the Pearson frequency distribution family. However, recent studies have shown that the percentiles of the Pearson distribution system cannot be considered very reliable when used in capability analysis, and the use of the Burr XII distribution has been suggested instead. The Burr XII distribution also appears particularly convenient, since its cumulative distribution function is expressed in closed form and its percentiles can be calculated by algebraic transformations. In this study, a Cpp(q) based on the Burr XII distribution is proposed. Because of its mathematical intractability, its unilateral BCPB confidence interval has been calculated by the bootstrap resampling method. The upper confidence limit of Cpp(q) can be considered an efficient indicator when supplier or process performances have to be compared or monitored; moreover, the procedure followed to calculate it can be easily implemented, and only a basic statistical background is required to interpret its results. The numerical example presented confirms the conclusions obtained using other approaches.

REFERENCES:

[1] Clements, J.A. Process capability calculations for non-normal distributions. Quality Progress, Vol 22, pp 95-98, 1989.
[2] Liu, Pei-Hsi; Chen, Fei-Long. Process capability analysis of non-normal process data using the Burr XII distribution. International Journal of Advanced Manufacturing Technology, Vol 27, 2006.
[3] Chen, K.L.; Chen, K.S.; Li, R.K. Suppliers capability and price analysis chart. International Journal of Production Economics, Vol 98, 2005.
[4] Huang, M.L.; Chen, K.S. Capability analysis for a multi-process product with bilateral specifications. International Journal of Advanced Manufacturing Technology, Vol 21, 2003.
[5] Pearn, W.L.; Shu, Ming-Hung. Measuring manufacturing capability based on lower confidence bound of Cpmk applied to current transmitter process. International Journal of Advanced Manufacturing Technology, Vol 23, 2004.
[6] Pearn, W.L.; Chang, Y.C.; Wu, Chien-Wei. Bootstrap approach for estimating process quality yield with application to light emitting diodes. International Journal of Advanced Manufacturing Technology, Vol 25, 2005.
[7] Greenwich, M.; Jahr-Schaffrath, B.L. A process incapability index. International Journal of Quality and Reliability Management, Vol 12(4), pp 58-71, 1995.
[8] Wu, C.C.; Kuo, H.L.; Chen, K.S. Implementing process capability indices for a complete product. International Journal of Advanced Manufacturing Technology, Vol 24, 2004.
[9] Johnson, N.L. Systems of frequency curves generated by methods of translation. Biometrika, Vol 36, 1949.
[10] Box, G.E.P.; Cox, D.R. An analysis of transformations. Journal of the Royal Statistical Society: Series B, Vol 26, pp 211-243, 1964.

[11] Sommerville, S.; Montgomery, D. Process capability indices and non-normal distributions. Quality Engineering, Vol 19(2).
[12] Pearn, W.L.; Kotz, S. Application of Clements' method for calculating second and third generation process capability indices for non-normal Pearsonian populations. Quality Engineering, Vol 7(1).
[13] Chen, K.S. Estimation of the process incapability index. Communications in Statistics: Theory and Methods, Vol 27(4), 1998.
[14] Chen, K.S.; Chen, K.L.; Li, R.K. Contract manufacturer selection by using the process incapability index Cpp. International Journal of Advanced Manufacturing Technology, Vol 26, 2005.
[15] Pearn, W.L.; Ko, C.H.; Wang, K.H. A multiprocess performance analysis chart based on the incapability index Cpp: an application to the chip resistors. Microelectronics Reliability, Vol 42, 2002.
[16] Burr, I.W. Cumulative frequency functions. Annals of Mathematical Statistics, Vol 13, pp 215-232, 1942.
[17] Burr, I.W.; Cislak, P.J. On a general system of distributions, I. Its curve-shape characteristics, II. The sample median. Journal of the American Statistical Association, Vol 63, 1968.
[18] Burr, I.W. Parameters for a general system of distributions to match a grid of a3 and a4. Communications in Statistics, Vol 2(1), pp 1-21, 1973.
[19] Wang, F.K.; Keats, J.B.; Zimmer, W.J. Maximum likelihood estimation of the Burr XII parameters with censored and uncensored data. Microelectronics Reliability, Vol 36(3), 1996.
[20] Wang, F.K. A new model with bathtub-shaped failure rate using an additive Burr XII distribution. Reliability Engineering and System Safety, Vol 70, 2000.
[21] Abdel-Ghaly, A.A.; Al-Dayian, G.R.; Al-Kashkari, F.H. The use of Burr type XII distribution on software reliability growth modelling. Microelectronics Reliability, Vol 37(2), 1997.
[22] Ali Mousa, M.A.M.; Jaheen, Z.F. Statistical inference for the Burr model based on progressively censored data. Computers and Mathematics with Applications, Vol 43, 2002.
[23] Efron, B. Bootstrap methods: another look at the jackknife. Annals of Statistics, Vol 7, pp 1-26, 1979.
[24] Chen, J.P.; Tong, L.I. Bootstrap confidence interval of the difference between two process capability indices. International Journal of Advanced Manufacturing Technology, Vol 21, pp 249-256, 2003.
[25] Chen, J.P.; Pearn, W.L. Testing process performance based on the yield: an application to the liquid-crystal display module. Microelectronics Reliability, Vol 42, 2002.
