Notes on Simulations in SAS Studio

If you are not careful about simulations in SAS Studio, you can run into problems. In particular, SAS Studio has a limited amount of memory available for writing to the RESULTS tab (where results are normally displayed). If you are doing many t-tests, for example, then displaying all of that output takes a fair bit of memory, and SAS Studio is likely to have trouble. It helps enormously to do something like this:

ods output TTests=pvalues;
ods select TTests;
proc ttest data=sim;
  by iter n;
  var x;
run;

The ODS SELECT statement reduces the output and increases the speed and the number of iterations you can do. In SAS Studio, it can make the difference between your code working and not working.

Power: comparing methods Here's an example from an empirical paper.

Speed: comparing methods For large analyses, speed and/or memory might be an issue when choosing between methods and/or algorithms. This paper compared different methods within SAS based on speed for doing permutation tests.

Use of macros for simulations The author of the previous paper provides an appendix with lengthy macros to use as more efficient replacements for SAS procedures such as PROC NPAR1WAY and PROC MULTTEST, which, on his data, could crash or fail to terminate in a reasonable time. In addition to developing your own macros, a common use of macros is to use macros written by someone else that have not been incorporated into the SAS language. You might just copy and paste the macro into your code, possibly with some modification, and you can use the macro even if you don't fully understand it. Popular macros might eventually get replaced by new PROCs or new functionality within SAS. This is sort of the SAS alternative to user-defined packages in R.

From Macro to PROC An example of the evolution from macros to PROCs is bootstrapping. For several years, SAS users relied on macros, often written by others, to do bootstrapping. In bootstrapping, you sample your data (or the rows of your data set) with replacement and get a new dataset with the same sample size but with some of the values repeated and others omitted. For example, if your data are

-3 -2 0 1 2 5 6 9

then bootstrap replicate data sets might be

-2 -2 1 5 6 9 9 9
-3 0 1 1 2 5 5 6

etc.

From Macro to Proc Basically, to generate the bootstrap data set, you generate n random integers from 1 to n, with replacement, and extract those observations from your data. This used to be done with macros, but now can be done with PROC SURVEYSELECT. If you search the web for bootstrapping, you might still run into one of those old macros. Newer methods might still be implemented using macros. A webpage from 2012 has a macro for bootstrap aggregating (bagging), a method of averaging the results of classifiers fit to multiple bootstrap samples. http://statcompute.wordpress.com/2012/07/14/a-sas-macro-for-bootstrap-aggregating-bagging/ There are also macros for searching the web to download movie reviews or extract data from social media. Try searching on "SAS macro 2013" for interesting examples.

Bootstrapping with PROC SURVEYSELECT
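The slide shows the PROC SURVEYSELECT call. Here is a sketch consistent with the options described on the next slide (the input data set name temps and the seed value are assumptions, and the replicate-count option is spelled REPS= in the SAS documentation):

proc surveyselect data=temps out=outboot
    seed=12345      /* random number seed (this particular value is arbitrary) */
    method=urs      /* unrestricted random sampling, i.e., sampling with replacement */
    samprate=1      /* resample 100% of the original sample size */
    outhits         /* write a selected row once for each time it is drawn */
    reps=1000;      /* number of bootstrap replicate data sets */
run;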

Bootstrapping with PROC SURVEYSELECT To explain the syntax, everything here is an option; there are no statements within the procedure, which is why there is only one semicolon before the RUN statement. We create an output dataset with whatever name we want, here outboot. seed is a random number seed. method refers to the type of sampling, which for the bootstrap should be sampling with replacement; if you sampled without replacement, you'd just be permuting your observations, which would have no effect on the mean, median, etc. samprate is the fraction of the sample you want, which here is 1 for 100% (i.e., we want the same sample size as the original). outhits gives the number of times each observation is selected, which isn't necessary but is interesting to observe. rep is the number of bootstrap datasets, which in this case is set to 1000, a typical number. I've also seen 100 used a lot for genetics examples that require time-consuming maximum likelihood approaches.

Bootstrapping with PROC SURVEYSELECT Opening the outboot dataset.

Bootstrapping with PROC SURVEYSELECT Things to note: the number of men versus women is a random variable in the replicated datasets; it is not fixed to be the same as in the original data set, but will be the same on average (the data set does, however, appear to be sorted by sex). The number of times an observation is repeated is given in the column NumberHits; if this number is 4, for example, the same row occurs four times in a row, and if an observation is selected 0 times, it doesn't appear in that replicate at all. The replicate is indicated in a column called Replicate. This is similar to the structure of the data sets we used to simulate power analyses, with (conceptually) multiple datasets simulated within a single SAS dataset.

Bootstrapping with PROC SURVEYSELECT The idea behind bootstrapping is that we can use the simulated data sets to get a simulated distribution of sample statistics: sample means, sample standard deviations, sample medians, sample coefficients of variation (s/x̄), interquartile ranges, etc. We have theory to tell us the distribution of the sample mean X̄, which is approximately normal in most cases with large sample sizes. The distribution of the sample median, the 95th percentile, the range, and so forth is more difficult theoretically and will depend on the underlying distribution, so bootstrapping can be useful for these statistics.

Bootstrapping The idea behind bootstrapping is that if we don't know what the underlying population is, then our sample is our best guess at what the underlying population is. The idea then is to draw samples from our initial sample as if we were drawing multiple samples from a population. This should work well if our sample is representative of the population we are making inferences about. When shouldn't this work well? If the sample size is too small, then the sample won't do a good job of representing the entire population, particularly the extremes of the distribution. Bootstrap samples don't extrapolate beyond the original sample, so a sample of 100 observations might not do a good job of estimating the 99th percentile or even the 95th percentile of a distribution. A sample that is biased will also not be corrected by using a bootstrap.

Bootstrapping We can now think about how to use the bootstrapped data that PROC SURVEYSELECT creates. Suppose we want to estimate the population median. A reasonable guess for the population median is the sample median, assuming that we don't know anything about the distribution. (Is this always the case? No: what is the best guess for the population median when sampling from a normal distribution?) The more interesting application of bootstrapping is to get some form of confidence interval around your estimate, or an estimated standard error.

Bootstrapping confidence intervals: percentiles

Bootstrapping confidence intervals: percentiles PROC UNIVARIATE's default output gives enough information for a 90% bootstrap interval and a 98% interval, but for the 95% interval we need the 2.5% and 97.5% quantiles. The 90% interval is (98.2, 98.4) for the median, which is pretty narrow. The 98% interval is (98.1, 98.5). To get the 95% interval we need to do a little more work, either by requesting customized percentiles from PROC UNIVARIATE (using the OUTPUT statement options sketched below) or by generating them a different way.
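Here is a sketch of one way to do that (the variable name temperature and the data set names other than outboot are assumptions):

/* Median of each bootstrap replicate */
proc means data=outboot noprint;
  by Replicate;
  var temperature;
  output out=medboot median=med;
run;

/* Request the 2.5th and 97.5th percentiles of the 1000 medians; the default
   PROC UNIVARIATE output already shows the 5th/95th and 1st/99th percentiles
   used for the 90% and 98% intervals. */
proc univariate data=medboot noprint;
  var med;
  output out=pctl pctlpts=2.5 97.5 pctlpre=ci;
run;

proc print data=pctl;   /* ci2_5 and ci97_5 are the endpoints of the 95% interval */
run;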

Customized percentiles in PROC UNIVARIATE The 95% interval is (98.15, 98.4). Note that the 90% and 98% intervals were symmetric around the sample median of 98.3 but the 95% interval was not.

Getting the percentiles by sorting Another way to get the percentiles is to sort the 1000 replicate medians and take the 26th and 975th ordered observations, as sketched below.
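A sketch of this approach, using the per-replicate medians in the data set medboot from the sketch above (names are assumptions):

proc sort data=medboot out=medsort;
  by med;
run;

data ci;
  set medsort;
  if _N_ = 26 or _N_ = 975;   /* 26th and 975th of the 1000 sorted medians */
run;

proc print data=ci;
run;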

What are the percentiles? It's a little tricky to get the right percentiles. Should it be observation 25 or 26, 975 or 976 or 974? My reasoning was that I wanted to get the middle 950 observations, so that there were 25 observations to the left of my interval and 25 observations to the right of my interval. The exact numbers people use vary, though, so sometimes you'll see people use the 25th and 975th observations in the sorted data. This usually won't make much difference.

What are percentiles? There's a nice function in R, quantile(), to help you find the percentiles. It interpolates between numbers, though, and has 9 different algorithms (you can specify which) to define the quantile. The R function apparently includes the SAS definition as one of the types. Here are some examples:

> x <- 1:1000
> quantile(x, .025)
  2.5%
25.975
> quantile(x, .975)
  97.5%
975.025
> quantile(x, .025, type=3)  # type 3 is the SAS definition
2.5%
  25
> quantile(x, .975, type=3)
97.5%
 975

Interpreting Bootstrap CIs How to interpret a bootstrap CI is a little unclear. Is the confidence level a probability? It is easiest to think of a bootstrap CI as a plausible range of values for the parameter. Of course, the same might be said of frequentist CIs. If I say that a 95% CI (based on the theory for normal distributions) for µ is (1.3, 2.1), this does NOT mean that there is a 95% chance that µ is between 1.3 and 2.1, since that would be treating µ as a random variable rather than a parameter.

Interpreting Bootstrap CIs What we hope to be the case for a frequentist CI is that 95% of the time, a 95% CI captures the population mean. The idea is that if there are many samples, then before you look at the data, you expect 95% of the CIs constructed from the different samples to capture the population mean. As an example, if there are multiple polls for the proportion of people who support, say, Hillary Clinton for the next presidential election, conducted by CNN, ABC, Fox News, MSNBC, The New York Times, etc., then hopefully 95% of those polls will have the true percentage in their confidence intervals, assuming those polls are independent. On the frequentist way of looking at things, once the intervals have been constructed, any individual poll either captures this true proportion or it does not. Probability statements don't make sense unless the proportion is a random variable.

Interpreting Bootstrap CIs Bootstrap CIs are similar to frequentist CIs in this respect: if the sample is representative, then approximately 100(1 − α)% of the time they capture the parameter value they are estimating.

Interpreting CIs How can we test how well a confidence interval (of any type) does? One thing we can do is estimate its coverage probability. That is, we generate many samples, construct a CI for each of them, and check how often the CI covers the true parameter. A well-constructed 95% confidence interval should cover the true parameter 95% of the time. For tests where you reject H0: µ = µ0 if and only if the confidence interval does not include µ0, the coverage probability is the flip side of the type 1 error: if the confidence interval includes µ0 95% of the time, then you reject H0: µ = µ0 exactly 5% of the time.
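This is easy to check by simulation. A minimal sketch, not taken from the slides, assuming a true mean of 0, standard deviation 1, samples of size 30, and 5000 simulated samples:

data sims;
  call streaminit(2014);
  do sim = 1 to 5000;
    do i = 1 to 30;
      x = rand('normal', 0, 1);
      output;
    end;
  end;
run;

proc means data=sims noprint;
  by sim;
  var x;
  output out=stats mean=xbar stddev=s n=n;
run;

data cover;
  set stats;
  t = quantile('t', 0.975, n - 1);
  covered = (xbar - t*s/sqrt(n) <= 0) and (0 <= xbar + t*s/sqrt(n));
run;

/* The mean of covered estimates the coverage probability (it should be near 0.95) */
proc means data=cover mean;
  var covered;
run;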

Bootstrap estimate of the standard error The bootstrap estimate of the standard error is obtained by taking the sample standard deviation of your statistic, where the sample standard deviation is computed across bootstrap replicates. For example, suppose my bootstrap median values are m_1, m_2, ..., m_B, where B = 1000 is the number of replicates. Then the bootstrap estimate of the standard error of the sample median is sqrt( (1/(B − 1)) Σ_{i=1}^{B} (m_i − m̄)² ), where m̄ is the arithmetic average of the bootstrap medians.

Bootstrap standard error
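The next two slides show the data sets and output. Here is a sketch of the kind of code that produces them (the variable name temperature is an assumption; meanboot3 is the name mentioned on the later slide):

/* First PROC MEANS: the median of each bootstrap replicate. Because OUTHITS
   was used, repeated rows appear explicitly, so no FREQ statement is needed. */
proc means data=outboot noprint;
  by Replicate;
  var temperature;
  output out=meanboot3 median=med;
run;

/* Second PROC MEANS: the standard deviation of the 1000 replicate medians
   is the bootstrap estimate of the standard error of the sample median.   */
proc means data=meanboot3 n mean std;
  var med;
run;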

Bootstrap standard error This is the outboot data set and the first PROC MEANS output.

Bootstrap standard error This is the meanboot3 data set and the second PROC MEANS output.

Bootstrap standard error CI From this estimate of the standard error, we can construct a 95% interval: 98.3 ± 1.96(0.08619) = (98.13, 98.47), which is similar to the percentile-based interval but slightly wider. In my experience I have mostly seen the percentile-based interval used for bootstrap CIs.

Hypothesis Testing You can also use a bootstrapping framework to do hypothesis testing. Suppose you want to test the difference in medians for two populations. The null hypothesis is that the two populations have the same median, so H0: η1 = η2, while the alternative is HA: η1 ≠ η2. Using the bootstrap, we can estimate the sample medians m1 and m2 and their standard errors for each group. The standard error for the difference is the square root of the sum of the squared standard errors: se(m1 − m2) = sqrt( se(m1)² + se(m2)² ). Using bootstrap estimates of se(m1) and se(m2), this can be used to test whether the difference m1 − m2 is significantly different from 0.

Bootstrapping in R Perhaps not surprisingly, bootstrapping is a little easier in R, largely because of a function called sample(), which allows you to sample with or without replacement. Suppose my temperatures are in the vector temperature. Then

B <- 1000
bootmedian <- numeric(B)
for (b in 1:B) {
  bootmedian[b] <- median(sample(temperature, replace = TRUE))
}
bootmedian <- sort(bootmedian)
ci <- c(bootmedian[26], bootmedian[975])
print(ci)

This is sufficient to generate your bootstrapped medians and the percentile interval. Of course you can also do sd(bootmedian) to get the bootstrap standard error.

Why do we use the sample mean instead of the sample median? Suppose we are sampling from a symmetric distribution. Both are unbiased estimators of the center of the distribution.

Why do we use the sample mean instead of the sample median? A basic answer is that for most distributions, the sample median is more variable than the sample mean, so for finite sample sizes the mean is more precise. Since the variance of the sample median is difficult to determine theoretically, this could be investigated by simulation, either by simulating many samples from the same distribution and computing the standard deviations of the sample means and sample medians (sketched below), or by using the bootstrap estimates of the standard errors if you are working with one sample. For the temperature data, it actually doesn't make much difference. A normal-based 95% confidence interval for the mean temperature is (98.12, 98.38) (based on PROC TTEST), and the mean temperature is a bit lower than the median, being 98.249 instead of 98.3.
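A minimal sketch of the first approach, simulating from a normal distribution with roughly the mean and standard deviation of the temperature data (the sample size of 130 matches the data; the 1000 simulated samples and the data set names are assumptions):

data meanmed;
  call streaminit(2014);
  do sim = 1 to 1000;
    do i = 1 to 130;
      x = rand('normal', 98.25, 0.73);
      output;
    end;
  end;
run;

proc means data=meanmed noprint;
  by sim;
  var x;
  output out=stats2 mean=xbar median=xmed;
run;

/* The standard deviations of xbar and xmed estimate the standard errors of
   the sample mean and the sample median for n = 130.                       */
proc means data=stats2 std;
  var xbar xmed;
run;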

What if there were no PROC SURVEYSELECT? PROC SURVEYSELECT was introduced in SAS version 6, and hasn't always been around. What would you do if it weren't available? Bootstrapping was invented in the 1970s, long before PROC SURVEYSELECT, and generally, statistical methods will be invented before there is a tidy SAS procedure for them. This is part of why being able to program can be important. In the past, people used macros to accomplish bootstrapping. How would you do this? First think about how you would generate one bootstrap replicate data set.

Bootstrap by hand Here is one way to create a single bootstrap dataset. To create many, you could loop over this code with a macro.
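A sketch of the kind of data step the slide shows (the data set name temps and the seed are assumptions):

data boot1;
  call streaminit(2014);
  do i = 1 to nobs;                       /* draw nobs rows with replacement */
    pick = ceil(rand('uniform') * nobs);  /* a random row number from 1 to nobs */
    set temps point=pick nobs=nobs;       /* read that row directly by position */
    output;
  end;
  stop;   /* a POINT= read never hits end-of-file, so stop the step explicitly */
run;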

Bootstrapping with a macro In most cases, it is more efficient to create a giant dataset with all of your bootstrap replicates together and then summarize using a CLASS statement in PROC MEANS or some other PROC. However, if you had 1 million observations and wanted 1000 bootstrap replicates, this would create a dataset with 1 billion observations. With the macro approach (sketched below), you can extract the information you need from each bootstrap data set (median, quantile, etc.), save that to a dataset, and then overwrite your bootstrap data set, so that you never use more space than 1 million observations at a time. There is no need to save every bootstrap dataset. Sometimes there is a tradeoff between speed and memory.
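A sketch of that macro approach (all names are assumptions): each pass rebuilds a single replicate, computes its median, and appends one row of results, so only one replicate is stored at a time.

%macro bootmed(B=1000, data=temps, var=temperature);
  %do b = 1 %to &B;
    data boot1;                              /* one bootstrap replicate */
      call streaminit(1000 + &b);            /* a different seed for each replicate */
      do i = 1 to nobs;
        pick = ceil(rand('uniform') * nobs);
        set &data point=pick nobs=nobs;
        output;
      end;
      stop;
    run;

    proc means data=boot1 noprint;           /* keep only the statistic we need */
      var &var;
      output out=one median=med;
    run;

    proc append base=allmeds data=one force; /* accumulate one row per replicate */
    run;
  %end;
%mend bootmed;

%bootmed(B=1000)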

Bootstrapping and outliers One interesting thing to think about is what happens when you use bootstrapping and there is an outlier in your data. Ordinarily, we want our inferences to be good for the population we are sampling from, and not sensitive to the idiosyncrasies of the sample we happened to collect. In other words, if we collect a new sample from the same population, we'd like our inferences to be stable. Bootstrapping simulates this idea of getting a slightly different sample from the same population: some of the same values will be repeated, and some of them will be left out. This leads back to the question about outliers. Sometimes an outlier will be in your bootstrap replicate, sometimes it won't. If an outlier is seriously affecting your inferences, this can show up as your bootstrap inferences not being very stable.

Bootstrapping and outliers What is the probability that an outlier is in one particular bootstrap replicate? Let's say you have a sample of size 5. The probability that the outlier is NOT chosen is (4/5)(4/5)(4/5)(4/5)(4/5) = (1 − 1/5)^5. What if the sample size is n? Then the probability is ((n − 1)/n)^n = (1 − 1/n)^n.

Bootstrapping and outliers What is this value for large n? lim_{n→∞} (1 − 1/n)^n = e^(−1). So for large n, the probability that the outlier IS in the dataset is approximately 1 − e^(−1) = 0.632, a little less than two-thirds.

Parametric bootstrapping The type of bootstrapping we've been doing is often called nonparametric bootstrapping, whereas parametric bootstrapping involves simulating samples from a known distribution and calculating simulated test statistics from these samples. This is also a useful procedure. Often the parametric bootstrap is based on simulating from a distribution that is estimated from the data. Suppose you believe that your data are normally distributed, and you want to know what the distribution of the sample coefficient of variation should be for the population you sampled from. You could use the nonparametric bootstrap as we did for the median, or you could assume that your data are normal, use the mean and variance estimated from your data, and draw samples from a normal distribution to simulate the distribution of the coefficient of variation from this normal population, with the same sample size as you obtained in your data.

Parametric bootstrapping the coefficient of variation First, we'll just look at computing the coefficient of variation from the temperature data in SAS. There are several ways to do this. We could use macro variables to store the mean and standard deviation using the output of PROC MEANS, or we can create a dataset that computes them. I'll use the second approach first, which will remind us how to use a RETAIN statement, but the macro variable approach is just as good.

Computing the coefficient of variation
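A sketch of the kind of data step the slide shows, using a RETAIN statement to carry running sums down the data set (the data set and variable names are assumptions):

data cov (keep=cov);
  retain sumx sumx2 0;                  /* running sums, carried across observations */
  set temps end=last nobs=n;
  sumx  = sumx  + temperature;
  sumx2 = sumx2 + temperature**2;
  if last then do;                      /* after the final observation, finish up */
    mean = sumx / n;
    sd   = sqrt((sumx2 - n*mean**2) / (n - 1));
    cov  = sd / mean;
    output;
  end;
run;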

Computing the coefficient of variation Note that the dataset cov just has one variable.

Computing the coefficient of variation If we want to do a parametric bootstrap assuming that temperatures are normally distributed with the same mean and standard deviation as we observed in our data, then it would help to have the mean and standard deviation stored as macro variables so that we can use those values whenever we want, or we could just hard-code them.

Computing the coefficient of variation To use a macro variable, we use CALL SYMPUT. We used this in the notes for week 11, but that was probably easy to forget. (I forgot the syntax myself and had to look it up again...) First, we'll just modify the previous code to store the mean and standard deviation as macro variables.
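One compact way to do this (a sketch, not necessarily the code on the slide; the data set and variable names are assumptions) is to route PROC MEANS output through a DATA _NULL_ step with CALL SYMPUT:

proc means data=temps noprint;
  var temperature;
  output out=stats mean=xbar stddev=s;
run;

data _null_;
  set stats;
  call symput('mean', put(xbar, best16.));  /* creates the macro variable &mean */
  call symput('sd',   put(s,    best16.));  /* creates the macro variable &sd   */
run;

%put mean=&mean sd=&sd;   /* check the stored values in the log */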

Computing the coefficient of variation The dataset cov still has just one variable.

Simulating the coefficient of variation Next we want to simulate normally distributed samples of size 130 (the same size as our original data) from a normal distribution with mean 98.2492308 and standard deviation 0.7331832. Since we're using the actual values generated internally in SAS, we'll actually be using as many digits of precision as SAS uses. If you hard-coded the values yourself, you'd have to decide how many digits of precision you wanted to use for your parameters.

Simulating the coefficient of variation Now that we have the mean and standard deviation saved, we can simulate from a normal distribution with those parameters. The point of doing this is to get an idea of how variable the sample coefficient of variation is when you have samples of size 130 from a normal distribution with the same mean and variance as your data. If you simulate data from a normal distribution with this mean and standard deviation, and the simulated coefficients of variation are never close to your sample coefficient of variation, this suggests that a normal distribution is not a good fit to your data. The parametric bootstrap is sometimes used as a kind of goodness-of-fit test.
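A sketch of the simulation (the number of replicates and the data set names are assumptions; &mean and &sd are the macro variables saved above):

data normsim;
  call streaminit(2014);
  do rep = 1 to 1000;
    do i = 1 to 130;                    /* same sample size as the original data */
      x = rand('normal', &mean, &sd);
      output;
    end;
  end;
run;

proc means data=normsim noprint;
  by rep;
  var x;
  output out=simstats mean=m stddev=s;
run;

data simcov;
  set simstats;
  cov = s / m;                          /* coefficient of variation for each simulated sample */
run;

proc sgplot data=simcov;
  histogram cov;
  /* refline &cov / axis=x;  -- add this if the sample c.o.v. is also saved as a
     macro variable, as on the refline slide below */
run;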

Simulating the coefficient of variation

Simulating the coefficient of variation Where does our sample coefficient of variation lie on this plot? It is very close to the center, so it is not a surprising value for the sample c.o.v. for this normal distribution. This is probably not the most powerful test for whether or not the data is normal, but sometimes this kind of procedure suggests that the data are not consistent with a particular distribution. In any case, your main interest might lie in the c.o.v. rather than the normality of the data. We can highlight some things in the plot to make it more interesting. For example, we could add a point to the plot illustrating our sample c.o.v. and maybe a central 95% interval (a percentile interval) for the distribution of simulated coefficient of variation values. This is not really a confidence interval, but tells you where you expect the sample coefficient of variation to lie most of the time from this normal distribution.

Simulating the coefficient of variation Here I just draw a refline at the sample c.o.v. Note that SAS Studio recognizes when I start to type a user-defined macro variable.

Simulating the coefficient of variation In this case, the sample c.o.v. is near the center of the distribution of simulated c.o.v. values. This is not surprising since the original distribution was reasonably close to normal. In other cases, sample statistics are not necessarily near the average of their simulated distributions if the original distribution is different from the assumed distribution.

Simulating the coefficient of variation In addition to seeing the distribution of the sample statistic, we can also get the standard error of the sample statistic, which is the standard deviation of s/x̄ when n = 130. (A standard error is the standard deviation of a sample statistic.) Here we just run PROC MEANS yet again.
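Using the simulated data set from the sketch above (simcov), that is just:

proc means data=simcov n mean std;
  var cov;   /* STD here is the parametric bootstrap standard error of the c.o.v. */
run;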

Parametric bootstrap estimate of the standard error The estimate of the standard error of the coefficient of variation is 0.00046.

Simulation versus theory In some cases, theory can tell us what the standard error for some tricky statistic is. Sometimes we want standard errors for statistics that estimate odds, p/(1 − p); odds ratios, [p1/(1 − p1)] / [p2/(1 − p2)]; precision, 1/σ²; or other functions of parameters. If theory can give you a good answer, then this is often preferable to doing a simulation. However, theoretical expressions for these things often involve mathematical (if not numerical) approximations, so a simulation, while approximate, might be just as good. The main disadvantage of simulation is often computation time and the fact that you need to do separate simulations for different parameter values. If I want the standard error for the c.o.v., I need to do separate simulations for different choices of n, µ, and σ (assuming a normal distribution). If I have a formula for the standard error, then I can quickly examine the effect of one or more parameters on the standard error as a function of the parameter(s).

How much is bootstrapping used?

How is bootstrapping used in phylogenetics? Bootstrapping is used in phylogenetics primarily to help quantify uncertainty about maximum likelihood estimates. The idea is that bootstrap replicates are made of the DNA sequences, and a best tree is constructed from each bootstrap replicate. Then the proportion of trees that have certain features in common is reported. This application is a bit different from the median example, because there is a discrete parameter being inferred.

Non-parametric bootstrapping in phylogenetics

Simulating the likelihood ratio statistic using parametric bootstrapping (Huelsenbeck and Bull, Systematic Biology, 1996) In this paper, the distribution of the likelihood ratio statistic δ = −2 log Λ is simulated under H0 (in a case where it is not asymptotically χ²) and compared to the observed likelihood ratio statistic.

Parametric versus nonparametric bootstrapping In my experience, or in my area, nonparametric bootstrapping is used much more than parametric bootstrapping, although simulation from known distributions that isn't called bootstrapping is used extensively. We expect parametric bootstrapping to give more precise answers if we really know something about the distribution that the data come from, and it is also useful in cases where we are testing a specific hypothesis.