Information Criteria Methods in SAS for Multiple Linear Regression Models


Paper SA5
Information Criteria Methods in SAS for Multiple Linear Regression Models
Dennis J. Beal, Science Applications International Corporation, Oak Ridge, TN

ABSTRACT

SAS 9.1 calculates Akaike's Information Criteria (AIC), Sawa's Bayesian Information Criteria (BIC), and the Schwarz Bayesian Criteria (SBC) for each of the 2^p - 1 possible models for p independent variables. AIC, BIC, and SBC estimate a measure of the difference between a given model and the true underlying model. The model with the smallest AIC, BIC, or SBC among all competing models is deemed the best model. This paper provides SAS code that simultaneously evaluates up to 1023 models to determine the subset of variables that minimizes the information criteria among all possible subsets. Simulated multivariate data are used to compare the performance of AIC, BIC, and SBC with the model diagnostics root mean square error (RMSE), Mallows Cp, and adjusted R2, and with the three heuristic methods forward selection, backward elimination, and stepwise regression. This paper is for intermediate SAS users of SAS/STAT who understand multivariate data analysis.

Key words: Akaike's Information Criteria, Sawa's Bayesian Information Criteria, Schwarz Bayesian Criteria, multiple linear regression, model selection

INTRODUCTION

Multiple linear regression is one of the statistical tools used for discovering relationships between variables. It is used to find the linear model that best predicts the dependent variable from the independent variables. A data set with p independent variables has 2^p - 1 possible subset models to consider, since each of the p variables is either included in or excluded from the model, not counting interaction terms or the intercept. Diagnostics are calculated for each model to help determine which model is best. These model diagnostics include the root mean square error (RMSE), the adjusted coefficient of determination (adjusted R2), and Mallows Cp. A good linear model will have a small RMSE, a small Cp, and an adjusted R2 close to 1. The three common heuristic methods are forward selection, backward elimination, and stepwise regression. However, these model diagnostics and heuristic methods alone are insufficient to determine the best model. This paper compares these model diagnostic and heuristic techniques with minimizing the information criteria statistics Akaike's Information Criteria (AIC), Sawa's Bayesian Information Criteria (BIC), and the Schwarz Bayesian Criteria (SBC) on simulated data from several distributions. The SAS code for determining the best linear model is shown. The SAS code presented in this paper uses the SAS System for personal computers version 9.1.3 running on a Windows XP Professional platform.

MODEL DIAGNOSTICS

Three model diagnostic statistical techniques often taught in statistics courses to determine the best linear model are minimizing the RMSE, maximizing the adjusted R2, and minimizing Mallows Cp.

ROOT MEAN SQUARE ERROR

The RMSE is a function of the sum of squared errors (SSE), the number of observations n, and the number of model parameters k ≤ p + 1, where k includes the intercept, as shown in Eqn. (1).

    RMSE = sqrt( SSE / (n - k) )    (1)

The RMSE is calculated for all possible subset models. Using this technique, the model with the smallest RMSE is declared the best linear model. This approach does account for the number of parameters in the model: adding parameters decreases the numerator, since the SSE decreases as variables are included, but the denominator n - k also decreases as k increases.
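Eqn. (1) is easy to check numerically. The sketch below (Python rather than SAS, with invented SSE values) shows why minimizing the RMSE only partially penalizes model size: an added parameter shrinks both the numerator and the denominator, so the RMSE can move in either direction.

```python
import math

def rmse(sse, n, k):
    """Root mean square error, Eqn. (1): sqrt(SSE / (n - k))."""
    return math.sqrt(sse / (n - k))

# Hypothetical fits of the same n = 25 observations. Adding a parameter
# always lowers the SSE, but the RMSE can move either way because the
# denominator n - k shrinks too.
base = rmse(sse=100.0, n=25, k=3)     # 3 parameters
helpful = rmse(sse=60.0, n=25, k=4)   # large SSE drop: RMSE falls
marginal = rmse(sse=99.0, n=25, k=4)  # trivial SSE drop: RMSE rises
print(base, helpful, marginal)
```

A variable that barely reduces the SSE can therefore leave the RMSE essentially unchanged or even increase it, which is the partial penalty the text describes.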

ADJUSTED R2

The adjusted coefficient of determination (adjusted R2) is the percentage of the variability of the dependent variable that is explained by the variation of the independent variables after accounting for the intercept and the number of independent variables. The adjusted R2 value ranges from 0 to 1 and is a function of the total sum of squares (SST), the SSE, the number of observations n, the number of model parameters k ≤ p + 1 (where k may include the intercept), and the binary indicator i (i = 1 if the intercept is included in the model, i = 0 otherwise). The equation for the adjusted R2 is shown in Eqn. (2).

    adjR2 = 1 - [ (n - i) SSE ] / [ (n - k) SST ]    (2)

The adjusted R2 is calculated for all possible subset models. Using this technique, the model with the largest adjusted R2 is declared the best linear model. This approach also accounts for the number of parameters k, since additional parameters decrease both the numerator and the denominator. However, several models often have adjusted R2 values at or near 1, so determining the best model among tied values is problematic.

MALLOWS CP

Mallows Cp (Mallows 1973) is another model diagnostic; it is a function of the SSE, the full model pure error variance estimate σ^2, the number of observations n, and the number of model parameters k ≤ p + 1, where k includes the intercept. Mallows Cp is shown in Eqn. (3).

    Cp = SSE / σ^2 + 2k - n    (3)

Mallows Cp is calculated for all possible subset models. Using this technique, the model with the smallest Cp is declared the best linear model. As the number of independent variables increases, the larger penalty term (2k) is offset by a smaller SSE.

HEURISTIC STATISTICAL TECHNIQUES

Three heuristic statistical techniques often taught in statistics courses to determine the best linear model are forward selection, backward elimination, and stepwise regression.

FORWARD SELECTION

Forward selection begins with only the intercept term in the model.
For each of the independent variables, an F statistic is calculated to measure the variable's contribution to the model. The variable with the smallest p-value below a specified α cutoff (e.g., 0.15) is added to the model. The model is rerun with this variable kept, and F statistics are recalculated for the remaining p - 1 independent variables. This process continues until no remaining variable has an F statistic p-value below the specified α. Once a variable is in the model, it remains in the model.

BACKWARD ELIMINATION

Backward elimination begins by including all variables in the model and calculating F statistics for each variable. The variable with the largest p-value exceeding the specified α cutoff is then removed from the model. This process continues until no remaining variable has an F statistic p-value above the specified α. Once a variable is removed from the model, it cannot be added back.

STEPWISE REGRESSION

Stepwise regression is a modification of forward selection in which variables already in the model do not necessarily stay there. As in forward selection, variables are added to the model one at a time, as long as their F statistic p-values are below the specified α. After a variable is added, however, the stepwise technique evaluates all of the variables already in the model and removes any variable whose F statistic p-value exceeds the specified α. Only after this check is made and the identified variables have been removed can another variable be added to the model. The stepwise process ends when none of the variables excluded from the model

has an F statistic significant at the specified α and every variable included in the model is significant at the specified α.

INFORMATION CRITERIA

Information criteria are measures of the goodness of fit or uncertainty of a model for the range of values of the data. In the context of multiple linear regression, information criteria measure the difference between a given model and the true underlying model.

AKAIKE'S INFORMATION CRITERIA

Akaike (1973) introduced the concept of information criteria as a tool for optimal model selection. Other authors who use AIC for model selection include Akaike (1987) and Bozdogan (1987, 2000). Akaike's Information Criteria (AIC) is a function of the number of observations n, the SSE, and the number of model parameters k ≤ p + 1, where k includes the intercept, as shown in Eqn. (4).

    AIC = n ln( SSE / n ) + 2k    (4)

The first term in Eqn. (4) is a measure of the model's lack of fit, while the second term (2k) is a penalty for additional parameters in the model. Therefore, as the number of parameters k increases, the lack of fit term decreases while the penalty term increases. Conversely, as variables are dropped from the model, the lack of fit term increases while the penalty term decreases. The model with the smallest AIC is deemed the best model since it minimizes the difference from the given model to the true model.

BAYESIAN INFORMATION CRITERIA

Sawa (1978) developed a model selection criterion derived from a Bayesian modification of the AIC criterion. Sawa's Bayesian Information Criteria (BIC) is a function of the number of observations n, the SSE, the pure error variance σ^2 from fitting the full model, and the number of model parameters k ≤ p + 1, where k includes the intercept, as shown in Eqn. (5).

    BIC = n ln( SSE / n ) + 2(k + 2) n σ^2 / SSE - 2 n^2 σ^4 / SSE^2    (5)

The penalty term for BIC is more complex than the AIC penalty term: it is a function of n, the SSE, and σ^2 in addition to k.
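The trade-off in Eqns. (4) and (5) between the lack of fit term and the penalty term can be verified numerically. The sketch below (Python rather than SAS; the SSE values and σ^2 are invented for illustration) shows a nested pair of fits where one extra parameter buys only a trivial SSE reduction, so both criteria prefer the smaller model:

```python
import math

def aic(sse, n, k):
    """Akaike's Information Criteria, Eqn. (4)."""
    return n * math.log(sse / n) + 2 * k

def bic_sawa(sse, n, k, sigma2):
    """Sawa's Bayesian Information Criteria, Eqn. (5); sigma2 is the
    pure error variance from fitting the full model."""
    q = n * sigma2 / sse
    return n * math.log(sse / n) + 2 * (k + 2) * q - 2 * q * q

# Hypothetical nested fits on n = 100 observations, full-model sigma^2 = 1.0.
n, sigma2 = 100, 1.0
print(aic(120.0, n, 3), aic(119.0, n, 4))              # smaller model wins
print(bic_sawa(120.0, n, 3, sigma2), bic_sawa(119.0, n, 4, sigma2))
```

Replacing the SSE drop of 1 with a large drop (say, from 120 to 100) flips the comparison, which is the lack of fit term overwhelming the penalty term.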
SCHWARZ BAYESIAN CRITERIA

Schwarz (1978) developed a model selection criterion that was also derived from a Bayesian modification of the AIC criterion. The Schwarz Bayesian Criteria (SBC) is a function of the number of observations n, the SSE, and the number of model parameters k ≤ p + 1, where k includes the intercept, as shown in Eqn. (6).

    SBC = n ln( SSE / n ) + k ln(n)    (6)

The penalty term for SBC is similar to that of AIC in Eqn. (4), but it multiplies k by ln n instead of the constant 2, thereby incorporating the sample size n.

SIMULATED DATA

A multivariate data set with 10 independent variables and one dependent variable was simulated from known true models that are linear functions of a subset of the independent variables. The following SAS code simulates observations for these independent X variables. The independent X variables come from normal, lognormal, exponential, and uniform distributions with various means and variances. Variables X5, X6, X9, and X10 are correlated with other variables.

data a;
do i = 1 to 1000;
  x1 = 10 + 5*rannor(0);        * Normal(10, 25);
  x2 = exp(3*rannor(0));        * Lognormal;
  x3 = 5 + 10*ranuni(0);        * Uniform;
  x4 = 100 + 10*rannor(0);      * Normal(100, 100);
  x5 = x1 + x4 + rannor(0);     * Normal bimodal;
  x6 = 5 + 2*x2 + 3*ranexp(0);  * Lognormal and exponential mixture;
  x7 = 0.5*exp(4*rannor(0));    * Lognormal;
  x8 = 10 + 8*ranuni(0);        * Uniform;
  x9 = x2 + x8 + 2*rannor(0);   * Lognormal, uniform and normal mix;
  x10 = 10 + x7 + 9*rannor(0);  * Lognormal and normal mix;
  output;
end;
drop i;
run;

The dependent variable Y was calculated using the simulated data for all 2^10 - 1 = 1023 possible subset true models with an intercept, and for another 1023 subset models without an intercept. The coefficient terms for the true models were randomly assigned for all variables for each true model. A pure random error term ε = 2*rannor(0) was added to each simulated Y observation. AIC, BIC, SBC, the three heuristic statistical techniques, and the three model diagnostics discussed earlier were calculated for every possible model. The best model determined by each method was compared with the true simulated model.

SAS CODE FOR INFORMATION CRITERIA

SAS calculates the AIC, BIC, and SBC for every possible subset of variables for models with up to 10 independent variables. The following SAS code from SAS/STAT computes AIC, BIC, and SBC for all possible subsets of multiple regression models for main effects. Note that x1-x10 in the model statement indicates that all of the variables x1 through x10 are included. The selection=adjrsq option specifies that the adjusted R2 method will be used to select the model, so that all 1023 possible models are considered, although other selection options such as selection=rsquare may also be used. The sse option displays the sum of squared errors for each model, while the aic, bic, and sbc options display the AIC, BIC, and SBC statistics for each model, respectively.
The adjrsq, cp, and rmse options display the adjusted R2, Mallows Cp, and RMSE statistics for each model, respectively. The first PROC REG calculates AIC, BIC, SBC, adjusted R2, Mallows Cp, and RMSE for all possible subsets of main effects with an intercept term. The second PROC REG calculates the same statistics for all possible subsets of main effects without an intercept term by specifying the noint option. The output data sets est and est0 are combined, sorted, and printed from the smallest AIC, BIC, and SBC values to the largest. The model with the smallest AIC, BIC, or SBC value is deemed the best model.

proc reg data=a outest=est;
model y = x1-x10 / selection=adjrsq sse aic bic sbc adjrsq cp rmse;
run; quit;

proc reg data=a outest=est0;
model y = x1-x10 / noint selection=adjrsq sse aic bic sbc adjrsq cp rmse;
run; quit;

data estout;
set est est0;
run;

proc sort data=estout; by _aic_;
proc print data=estout(obs=8);
title 'best model by AIC';
run;

proc sort data=estout; by _bic_;
proc print data=estout(obs=8);
title 'best model by BIC';
run;

proc sort data=estout; by _sbc_;
proc print data=estout(obs=8);
title 'best model by SBC';
run;
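For readers outside SAS, the same exhaustive search can be sketched in plain Python (standard library only). The mock-up below enumerates all 2^p - 1 subsets for a small simulated data set with p = 3 predictors, fits each subset by least squares, and ranks the subsets by AIC and SBC from Eqns. (4) and (6), the counterpart of sorting the estout data set by _aic_ and _sbc_. All names and data here are invented for illustration:

```python
import itertools
import math
import random

def fit_sse(cols, y):
    """OLS via the normal equations (Gaussian elimination); returns the SSE."""
    n, p = len(y), len(cols)
    a = [[sum(ci[t] * cj[t] for t in range(n)) for cj in cols] for ci in cols]
    b = [sum(ci[t] * y[t] for t in range(n)) for ci in cols]
    for col in range(p):                       # forward elimination with pivoting
        piv = max(range(col, p), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = a[r][col] / a[col][col]
            b[r] -= f * b[col]
            for c in range(col, p):
                a[r][c] -= f * a[col][c]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):             # back substitution
        beta[i] = (b[i] - sum(a[i][j] * beta[j] for j in range(i + 1, p))) / a[i][i]
    return sum((y[t] - sum(beta[j] * cols[j][t] for j in range(p))) ** 2
               for t in range(n))

def aic(sse, n, k):
    """Akaike's Information Criteria, Eqn. (4)."""
    return n * math.log(sse / n) + 2 * k

def sbc(sse, n, k):
    """Schwarz Bayesian Criteria, Eqn. (6)."""
    return n * math.log(sse / n) + k * math.log(n)

# Small simulated data set: the true model uses predictors 0 and 2 only.
rng = random.Random(7)
n = 60
x = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(3)]
y = [1 + 2 * x[0][t] - 3 * x[2][t] + rng.gauss(0, 1) for t in range(n)]

results = []
for size in (1, 2, 3):
    for subset in itertools.combinations(range(3), size):
        cols = [[1.0] * n] + [x[j] for j in subset]   # intercept + chosen predictors
        s = fit_sse(cols, y)
        k = len(cols)                                 # k includes the intercept
        results.append((subset, s, aic(s, n, k), sbc(s, n, k)))

best_aic = min(results, key=lambda r: r[2])[0]
best_sbc = min(results, key=lambda r: r[3])[0]
print("best by AIC:", best_aic, " best by SBC:", best_sbc)
```

Because SBC's penalty k·ln n grows with the sample size while AIC's stays at 2k, SBC tends to favor the sparser true subset, mirroring the paper's finding that SBC recovers the true model most consistently.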

SAS CODE FOR MODEL DIAGNOSTICS

The following SAS code determines the best models for the model diagnostics adjusted R2, Mallows Cp, and RMSE.

proc sort data=estout; by _rmse_;
proc print data=estout(obs=8);
title 'best model by RMSE';
run;

proc sort data=estout; by _cp_;
proc print data=estout(obs=8);
title 'best model by CP';
run;

proc sort data=estout; by descending _adjrsq_;  ** want largest adjusted R2;
proc print data=estout(obs=8);
title 'best model by adjusted R2';
run;

SAS CODE FOR HEURISTIC METHODS

Independent variables X1 through X10 were regressed against the dependent variable Y using forward selection, backward elimination, and stepwise regression with an assumed entry and exit significance level α = 0.15. An entry significance level of 0.15, specified in the slentry=0.15 option, means a variable must have a p-value below 0.15 to enter the model during forward selection and stepwise regression. An exit significance level of 0.15, specified in the slstay=0.15 option, means a variable must have a p-value above 0.15 to leave the model during backward elimination and stepwise regression.

The following SAS code performs the forward selection method by specifying the option selection=forward. The model diagnostics are output to the data set est1.

proc reg data=a outest=est1;
model y = x1-x10 / slentry=0.15 selection=forward sse aic bic sbc;
run; quit;

The following SAS code performs the backward elimination method by specifying the option selection=backward. The model diagnostics are output to the data set est2.

proc reg data=a outest=est2;
model y = x1-x10 / slstay=0.15 selection=backward sse aic bic sbc;
run; quit;

The following SAS code performs stepwise regression by specifying the option selection=stepwise. The model diagnostics are output to the data set est3.

proc reg data=a outest=est3;
model y = x1-x10 / slstay=0.15 slentry=0.15 selection=stepwise sse aic bic sbc;
run; quit;

COMPARISON OF METHODS

SAMPLE SIZE n = 1000

Fig. 1 shows the frequency with which each method group correctly selected the true model among all 2046 possible models (including the models with an intercept term) using a sample size of n = 1000. In general, the information criteria methods (AIC, BIC, and SBC) performed best, correctly selecting 63% of the true models. The heuristic methods (stepwise, backward, and forward selection) correctly selected 43% of the true models. The model diagnostics (adjusted R2, Cp, and RMSE) performed worst, correctly selecting only 28% of the true models. Fig. 1 also shows the percent each individual method correctly selected the true models. Clearly, the information criterion SBC easily outperformed the other methods by correctly selecting 96% of the true models. Information criteria AIC and BIC correctly selected 45% and 46% of the true models, respectively. Across all methods, stepwise regression was a distant second to SBC, correctly selecting 54% of the true models. Backward elimination correctly selected 46% of the true models, while forward selection correctly selected only 29% of the true models. Among the model diagnostics, Mallows Cp correctly selected 47% of the true models. However, the adjusted R2 and RMSE methods performed worst, selecting only 10% and 18% of the true models, respectively.

Fig. 1. Histograms of percent each method correctly selected the true model of all possible models (n = 1000)

Fig. 2. Line plot of percent of models correctly selected for each method by the number of independent variables in the true model (n = 1000)

Fig. 2 examines the results from Fig. 1 further by plotting the percent each method correctly selected the true models against the number of independent variables in the true model. The success rate for SBC consistently exceeded 90%, regardless of the number of variables in the true model. All methods generally improved as the number of independent variables increased, except forward selection, which performed better when the true model contained either one or two independent variables or at least nine. Adjusted R2 and RMSE performed worst of all the methods, particularly for true models with up to seven independent variables. Given these results for a large sample size of n = 1000, a natural extension is to consider whether they still hold for a smaller sample size. The first 100 observations were selected from the simulated data set of 1000 observations, and the SAS code was rerun on this smaller data set of n = 100 observations so that the performance of these methods in selecting the true models could be compared with the full data set.

SAMPLE SIZE n = 100

The frequency with which each method group correctly selected the true model among all 2046 possible models using a sample size of n = 100 is shown in Fig. 3. In general, the information criteria methods again performed best, correctly selecting 59% of the true models. The heuristic methods (stepwise, backward, and forward selection) correctly selected 41% of the true models. The model diagnostics (adjusted R2, Cp, and RMSE) performed worst, correctly selecting only 26% of the true models. These results agree well with Fig. 1. Fig. 3 also shows the percent each individual method correctly selected the true models for n = 100.
The information criterion SBC again easily outperformed the other methods by correctly selecting 83% of the true models. Information criteria AIC and BIC correctly selected 42% and 52% of the true models, respectively. Across all methods, stepwise regression was again a distant second to SBC, correctly selecting 53% of the true models. Backward elimination correctly selected 45% of the true models, while forward selection correctly selected only 26% of the true models. Among the model diagnostics, Mallows Cp correctly selected 43% of the true models. However, the adjusted R2 and RMSE methods performed worst, selecting only 17% and 18% of the true models, respectively. Note that with a smaller sample size, less information is available in the data to correctly select the true models. Thus, the percentages in Fig. 3 are typically smaller than those in Fig. 1, but still consistent with them.

Fig. 3. Histograms of percent each method correctly selected the true model of all possible 2046 models (n = 100)

Fig. 4. Line plot of percent of models correctly selected for each method by the number of independent variables in the true model (n = 100)

Fig. 4 examines the results from Fig. 3 further by plotting the percent each method correctly selected the true models against the number of independent variables in the true model. The success rate for SBC consistently exceeded 65%, regardless of the number of variables in the true model. All methods generally improved as the number of independent variables increased, except forward selection, which performed better when the true model contained either one or two independent variables or at least nine. Adjusted R2 and RMSE again performed worst of all the methods, particularly for true models with up to seven independent variables.

CONCLUSION

SAS is a powerful tool that uses AIC, BIC, and SBC to simultaneously evaluate all possible subsets of multiple regression models to determine the best model for up to 10 independent variables. Using information criteria for multivariate model selection has been shown to be superior to heuristic methods (forward selection, backward elimination, and stepwise regression) and model diagnostics (adjusted R2, Mallows Cp, and RMSE) on simulated data with a known underlying model. Among the three information criteria studied, SBC performed best by correctly selecting the true model most consistently after enumerating all possible true models from the simulated data. SBC consistently performed best for both sample sizes n = 1000 and n = 100.

REFERENCES

Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. N. Petrov and F. Csaki (Eds.), Second International Symposium on Information Theory, 267-281. Budapest: Akademiai Kiado.

Akaike, H. (1987). Factor analysis and AIC. Psychometrika, 52, 317-332.
Beal, D. J. (2005). SAS code to select the best multiple linear regression model for multivariate data using information criteria. Proceedings of the 13th Annual Conference of the SouthEast SAS Users Group.

Bozdogan, H. (1987). Model selection and Akaike's information criterion (AIC): the general theory and its analytical extensions. Psychometrika, 52(3), 345-370.

Bozdogan, H. (2000). Akaike's information criterion and recent developments in informational complexity. Journal of Mathematical Psychology, 44, 62-91.

Mallows, C. L. (1973). Some comments on Cp. Technometrics, 15, 661-675.

Sawa, T. (1978). Information criteria for discriminating among alternative regression models. Econometrica, 46, 1273-1282.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6, 461-464.

CONTACT INFORMATION

The author welcomes and encourages any questions, corrections, feedback, and remarks. Contact the author at:

Dennis J. Beal, Ph.D. candidate
Senior Statistician
Science Applications International Corporation
P.O. Box 21
151 Lafayette Drive
Oak Ridge, Tennessee 37831
phone: 865-481-8736
fax: 865-481-4757
e-mail: dennis.j.beal@saic.com

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are registered trademarks or trademarks of their respective companies.