More data analysis examples

R packages used

library(ggplot2)
library(tidyr)
library(MASS)
library(leaps)
library(dplyr)

## 
## Attaching package: 'dplyr'
## 
## The following object is masked from 'package:MASS':
## 
##     select
## 
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## 
## The following objects are masked from 'package:base':
## 
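Aside: those masking messages mean that, with both MASS and dplyr loaded, a bare select() calls dplyr's version. A minimal sketch of calling a masked function explicitly with the :: operator (the data frame d0 is made up, just for illustration):

d0=data.frame(x=1:3,y=4:6)   # hypothetical data, not from these slides
dplyr::select(d0,x)          # dplyr's column-selection verb, regardless of masking
# MASS::select is an unrelated function (for ridge-regression fits),
# so calling it on a data frame would be an error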

The windmill data

Engineer: does the amount of electricity generated by a windmill depend on how strongly the wind is blowing?
Measurements of wind speed and DC current generated at various times.
Assume the various times to be randomly selected: the aim is to generalize to this windmill at all times.
Research questions:
- Is there a relationship between wind speed and current generated?
- If so, what kind of relationship?
- Can we model the relationship to do predictions?
Do the analysis in R.

The data

Some of it:

windmill=read.csv("windmill.csv",header=T)
head(windmill,n=10)

##    wind.velocity DC.output
## 1           5.00     1.582
## 2           6.00     1.822
## 3           3.40     1.057
## 4           2.70     0.500
## 5          10.00     2.236
## 6           9.70     2.386
## 7           9.55     2.294
## 8           3.05     0.558
## 9           8.15     2.166
## 10          6.20     1.866

Strategy

Two quantitative variables, looking for a relationship: regression methods.
Start with a picture (scatterplot).
Fit models and do model checking, fixing things up as necessary.
Scatterplot: 2 variables, DC.output and wind.velocity. The first is the output/response, the other is the input/explanatory. Put DC.output on the vertical scale.
Add a trend, but don't want to assume linear:

g=ggplot(windmill,aes(y=DC.output,x=wind.velocity))+
  geom_point()+geom_smooth(se=F)

Scatterplot

g

[Scatterplot of DC.output (vertical) against wind.velocity (horizontal), with smooth trend.]

Comments

Definitely a relationship: as wind velocity increases, so does DC output. (As you'd expect.)
Is the relationship linear? To help judge, the lowess curve smooths the scatterplot trend. ("Locally weighted least squares", which downweights outliers. Not constrained to be straight.)
Trend more or less linear for a while, then curves downwards. A straight line is not so good here.

Fitting a straight line

Let's try fitting a straight line anyway, and see what happens:

DC.1=lm(DC.output~wind.velocity,data=windmill)
summary(DC.1)

## 
## Call:
## lm(formula = DC.output ~ wind.velocity, data = windmill)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.59869 -0.14099  0.06059  0.17262  0.32184 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    0.13088    0.12599   1.039     0.31    
## wind.velocity  0.24115    0.01905  12.659 7.55e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.2361 on 23 degrees of freedom
## Multiple R-squared:  0.8745, Adjusted R-squared:  0.869

Comments

Strategy: lm actually fits the regression. Store the results in a variable, then look at them, e.g. via summary.
Results actually pretty good: wind.velocity strongly significant, R-squared (87%) high.
How to check whether the regression is appropriate? Look at the residuals, observed minus predicted.
Can plot the regression object; get 4 plots, at least 2 of which are mysterious. Or can obtain the residuals and predicted values directly and plot them. We go the second way:

windmill$fit.linear=fitted(DC.1)
windmill$res.linear=resid(DC.1)
g=ggplot(windmill,aes(y=res.linear,x=fit.linear))+
  geom_point()+geom_smooth(se=F)
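For reference, a minimal sketch of the first route mentioned above: plotting a fitted lm object gives R's four built-in diagnostic plots (standard R, though these exact calls are not in the original analysis):

plot(DC.1)           # all 4: residuals vs. fitted, normal Q-Q, scale-location, leverage
plot(DC.1,which=1)   # just the residuals-vs-fitted plot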

Plot of residuals against fitted values

g

[Scatterplot of res.linear (vertical) against fit.linear (horizontal), with smooth trend.]

Comments on residual plot

The residual plot should be a random scatter of points: no pattern left over after fitting the regression. The lowess curve should be more or less straight across at 0.
Here, we have a curved trend on the residual plot. This means the original relationship must have been a curve (as we saw on the original scatterplot).
Possible ways to fit a curve:
- Add a squared term in the explanatory variable.
- Transform the response variable (doesn't work well here).
- See what science tells you about the mathematical form of the relationship, and try to apply it.

Parabolas and fitting the parabola model

A parabola has equation y = ax^2 + bx + c for suitable a, b, c. About the simplest function that is not a straight line.
Fit one using lm by adding x^2 to the right side of the model formula with +:

DC.2=lm(DC.output~wind.velocity+I(wind.velocity^2),
  data=windmill)

The I() is necessary because ^ in a model formula otherwise means something different (to do with interactions in ANOVA).
This is actually a multiple regression. Call it the parabola model.
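An equivalent way to fit the same parabola, assuming the windmill data frame above, is poly() with raw=TRUE, which builds the linear and squared terms in one shot (a sketch; the I() form is what the analysis below uses):

DC.2a=lm(DC.output~poly(wind.velocity,2,raw=TRUE),data=windmill)
# same fitted values as DC.2; coefficients labelled poly(...)1 and poly(...)2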

Parabola model output

summary(DC.2)

## 
## Call:
## lm(formula = DC.output ~ wind.velocity + I(wind.velocity^2), 
##     data = windmill)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.26347 -0.02537  0.01264  0.03908  0.19903 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        -1.155898   0.174650  -6.618 1.18e-06 ***
## wind.velocity       0.722936   0.061425  11.769 5.77e-11 ***
## I(wind.velocity^2) -0.038121   0.004797  -7.947 6.59e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1227 on 22 degrees of freedom
## Multiple R-squared:  0.9676, Adjusted R-squared:  0.9646
## F-statistic: 328.3 on 2 and 22 DF,  p-value: < 2.2e-16

Comments on output

R-squared has gone up a lot, from 87% (line) to 97% (parabola).
Coefficient of the squared term strongly significant (P-value 6.59 × 10^-8).
Adding the squared term has definitely improved the fit: the parabola model is better than the linear one.
But... need to check the residuals again:

windmill$fit.parabola=fitted(DC.2)
windmill$res.parabola=resid(DC.2)
g=ggplot(windmill,aes(y=res.parabola,x=fit.parabola))+
  geom_point()

Residual plot from parabola model

g

[Scatterplot of res.parabola against fit.parabola.]

Or, with lowess curve

g+geom_smooth(se=F)

[Same residual plot with smooth trend added.]

Scatterplot with fitted line and curve

Residual plot basically random. Good. Lowess curve not flat, but we might be deceived by the few residuals above 0 in the middle.
Scatterplot with fitted line and curve, like this:

g=ggplot(windmill,aes(y=DC.output,x=wind.velocity))+
  geom_point()+geom_smooth(method="lm",se=F)+
  geom_line(aes(y=fit.parabola))

This plots:
- the scatterplot (geom_point);
- the straight line (via a tweak to geom_smooth, which draws the best-fitting line);
- the fitted curve, using the predicted DC.output values, joined by lines (with points not shown).
The trick in the geom_line is to use the predictions as the y-points to join by lines, instead of the original data points. Without the aes in the geom_line, the original data points would be joined by lines.

Scatterplot with fitted line and curve

g

[Scatterplot of DC.output against wind.velocity, with the fitted straight line and the parabola curve.]

Curve clearly fits better than line.

Another approach to a curve

There is a problem with parabolas, which we'll see later.
Go back to the engineer with the findings so far. Ask, "what should happen as wind velocity increases?": there is an upper limit on electricity generated, but otherwise, the larger the wind velocity, the more electricity generated.
Mathematically, sounds like an asymptote. Straight lines and parabolas don't have them, but e.g. y = 1/x does: as x gets bigger, y approaches zero without reaching it.
What happens to y = a + b(1/x) as x gets large? y gets closer and closer to a: a is the asymptote.
Fit this; call it the asymptote model.
Alternative: y = a + b e^(-x), which approaches its asymptote faster.
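For the record, a sketch of fitting that alternative, y = a + b e^(-x), with lm: transform x first, exactly as we are about to do with 1/x. (This model is not fitted in the original analysis; wind.exp is a made-up name.)

windmill$wind.exp=exp(-windmill$wind.velocity)   # the e^(-x) term
DC.alt=lm(DC.output~wind.exp,data=windmill)
summary(DC.alt)   # intercept estimates the asymptote, as in the 1/x model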

How to fit the asymptote model?

Same idea as for the parabola: define a new explanatory variable to be 1/x, and predict y from it.
x is velocity, distance over time. So 1/x is time over distance. In the walking world, if you walk 5 km/h, you take 12 minutes to walk 1 km, called your pace. So call 1 over wind.velocity wind.pace.
Make a scatterplot first to check for straightness (next page):

windmill$wind.pace=1/windmill$wind.velocity
g=ggplot(windmill,aes(y=DC.output,x=wind.pace))+
  geom_point()+geom_smooth(se=F)

and run the regression like this (page after):

DC.3=lm(DC.output~wind.pace,data=windmill)

Scatterplot for wind.pace

g

[Scatterplot of DC.output against wind.pace, with smooth trend.]

That's pretty straight.

Regression output

## 
## Call:
## lm(formula = DC.output ~ wind.pace, data = windmill)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.20547 -0.04940  0.01100  0.08352  0.12204 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.9789     0.0449   66.34   <2e-16 ***
## wind.pace    -6.9345     0.2064  -33.59   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.09417 on 23 degrees of freedom
## Multiple R-squared:  0.98, Adjusted R-squared:  0.9792
## F-statistic:  1128 on 1 and 23 DF,  p-value: < 2.2e-16

Comments

R-squared, 98%, even higher than for the parabola model (97%).
Simpler model: only one explanatory variable (wind.pace) vs. 2 for the parabola model (wind.velocity and its square).
wind.pace (unsurprisingly) strongly significant.
Looks good, but check the residual plot:

windmill$fit.asymptote=fitted(DC.3)
windmill$res.asymptote=resid(DC.3)
g=ggplot(windmill,aes(y=res.asymptote,x=fit.asymptote))+
  geom_point()+geom_smooth(se=F)

Residual plot for asymptote model

g

[Scatterplot of res.asymptote against fit.asymptote, with smooth trend.]

A diversion: gather

Sometimes things are in different columns in a data frame when we want them in the same column, e.g.:

group.1=c(10,13,13,14)
group.2=c(12,15,15,19)
d=data.frame(group.1,group.2) ; d

##   group.1 group.2
## 1      10      12
## 2      13      15
## 3      13      15
## 4      14      19

These are made-up scores for 8 different people in 2 different groups. Want one column of scores, and another column labelling the groups (e.g. for drawing boxplots).

Making gather work

Use gather from package tidyr:

long=gather(d,group,score)

This makes one long column of scores, each one identified by the group it came from:

long

##     group score
## 1 group.1    10
## 2 group.1    13
## 3 group.1    13
## 4 group.1    14
## 5 group.2    12
## 6 group.2    15
## 7 group.2    15
## 8 group.2    19

Then e.g.:

g=ggplot(long,aes(x=group,y=score))+geom_boxplot()
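In more recent versions of tidyr, gather has been superseded by pivot_longer; a sketch of the same reshaping (assuming a tidyr new enough to have it):

long2=pivot_longer(d,everything(),names_to="group",values_to="score")
long2   # same content as long, though the rows come out in a different order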

The boxplot

g

[Side-by-side boxplots of score for group.1 and group.2.]

Plotting trends on scatterplot

Residual plot not bad. But residuals go up to 0.10 and down to -0.20, suggesting possible skewness (not normal). I think it's not perfect, but OK overall.
Next: plot the scatterplot with all three fitted lines/curves on it (for comparison), with a legend saying which is which.
Trickery: need all the fits in one column, with a second one labelling the model they were fits for: gather:

w2=gather(windmill,model,fit,
  c(fit.linear,fit.parabola,fit.asymptote))
g=ggplot(w2,aes(y=DC.output,x=wind.velocity,
  colour=model))+geom_point()+geom_line(aes(y=fit))

What's in w2

str(w2)

## 'data.frame': 75 obs. of  8 variables:
##  $ wind.velocity: num  5 6 3.4 2.7 10 9.7 9.55 3.05 8.15 6.2 ...
##  $ DC.output    : num  1.58 1.82 1.06 0.5 2.24 ...
##  $ res.linear   : num  0.245 0.244 0.106 -0.282 -0.306 ...
##  $ res.parabola : num  0.0762 0.0126 0.1956 -0.0181 -0.0254 ...
##  $ wind.pace    : num  0.2 0.167 0.294 0.37 0.1 ...
##  $ res.asymptote: num  -0.00995 -0.0011 0.11771 0.08949 -0.04941 ...
##  $ model        : chr  "fit.linear" "fit.linear" "fit.linear" "fit.linear" ...
##  $ fit          : num  1.337 1.578 0.951 0.782 2.542 ...

Scatterplot with fitted curves

g

[Scatterplot of DC.output against wind.velocity, with the three fits (fit.asymptote, fit.linear, fit.parabola) distinguished by colour in the legend.]

Comments

Predictions from the curves are very similar.
Predictions from the asymptote model are as good, and come from a simpler model (one x, not two), so prefer those.
Go back to the asymptote model summary:

Asymptote model summary

summary(DC.3)

## 
## Call:
## lm(formula = DC.output ~ wind.pace, data = windmill)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.20547 -0.04940  0.01100  0.08352  0.12204 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.9789     0.0449   66.34   <2e-16 ***
## wind.pace    -6.9345     0.2064  -33.59   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.09417 on 23 degrees of freedom
## Multiple R-squared:  0.98, Adjusted R-squared:  0.9792
## F-statistic:  1128 on 1 and 23 DF,  p-value: < 2.2e-16

Comments

Intercept in this model is about 3. The intercept of the asymptote model is the asymptote (the upper limit of DC.output).
Not close to the asymptote yet. Therefore, from this model, the wind could get stronger and the windmill would generate appreciably more electricity. This is extrapolation! Would like more data from times when wind.velocity is higher.
Eyeballed 95% CI for the asymptote: 3 ± 2(0.05) = (2.9, 3.1).
Slope about -7. Why negative? As wind.velocity increases, wind.pace goes down, and DC.output goes up. Check.
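Rather than eyeballing, confint on the fitted model gives an exact 95% CI for the intercept, which here is the asymptote; a one-line sketch (not run on the slides, but it should come out close to the (2.9, 3.1) above):

confint(DC.3)   # rows: intercept (the asymptote) and the wind.pace slope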

Checking back in with research questions

Is there a relationship between wind speed and current generated? Yes.
If so, what kind of relationship is it? One with an asymptote.
Can we model the relationship, in such a way that we can do predictions? Yes: see model DC.3 and the plot of the fitted curve.
Good. Job done.

Job done, kinda

Just because the parabola model and asymptote model agree over the range of the data doesn't necessarily mean they agree everywhere.
Extend the range of wind.velocity to 1 to 16 (steps of 0.5), and predict DC.output according to the two models:

wv=seq(1,16,0.5)
wv

##  [1]  1.0  1.5  2.0  2.5  3.0  3.5  4.0  4.5  5.0  5.5 ...
## [15]  8.0  8.5  9.0  9.5 10.0 10.5 11.0 11.5 12.0 12.5 ...
## [29] 15.0 15.5 16.0

R has predict, which requires what to predict for, as a data frame. The data frame has to contain values, with matching names, for all the explanatory variables in the regression(s).

Setting up data frame to predict from

The linear model had just wind.velocity.
The parabola model had that as well (the squared term will be calculated).
The asymptote model had just wind.pace (the reciprocal of velocity).
So create a data frame called wv.new with those in:

wv.new=data.frame(wind.velocity=wv,
  wind.pace=1/wv)

Doing predictions, one for each model

Use the same fit. names as before:

fit.linear=predict(DC.1,wv.new)
fit.parabola=predict(DC.2,wv.new)
fit.asymptote=predict(DC.3,wv.new)

Put it all into a data frame for plotting, along with the original data:

my.fits=data.frame(wv.new,fit.linear,fit.parabola,fit.asymptote)
head(my.fits)

##   wind.velocity wind.pace fit.linear fit.parabola fit.asymptote
## 1           1.0 1.0000000  0.3720240   -0.4710832    -3.9556872
## 2           1.5 0.6666667  0.4925984   -0.1572664    -1.6441714
## 3           2.0 0.5000000  0.6131729    0.1374900    -0.4884135
## 4           2.5 0.4000000  0.7337473    0.4131860     0.2050412
## 5           3.0 0.3333333  0.8543217    0.6698215     0.6673444
## 6           3.5 0.2857143  0.9748962    0.9073966     0.9975609

Making a plot

To make a plot, we use the same trick as last time to get all three predictions on a plot with a legend:

fits.long=gather(my.fits,model,fit,
  fit.linear:fit.asymptote)
g=ggplot(fits.long,aes(y=fit,x=wind.velocity,
  colour=model))+geom_line()

The observed wind velocities were in this range:

vels=range(windmill$wind.velocity) ; vels

## [1]  2.45 10.20

DC.output was between 0 and 3 from the asymptote model. Add lines to the graph:

g2=g+geom_hline(yintercept=0)+geom_hline(yintercept=3)+
  geom_vline(xintercept=vels[1])+
  geom_vline(xintercept=vels[2])

The plot

g2

[Predicted fit against wind.velocity for the three models (fit.asymptote, fit.linear, fit.parabola), with horizontal lines at 0 and 3 and vertical lines at the observed velocity range.]

Comments (1)

Over the range of the data, the two models agree with each other well.
Outside the range of the data, they disagree violently! For larger wind.velocity, the asymptote model behaves reasonably; the parabola model does not.
What happens as wind.velocity goes to zero? Should find DC.output goes to zero as well. Does it? For the parabola model:

coef(DC.2)

##        (Intercept)      wind.velocity I(wind.velocity^2) 
##        -1.15589824         0.72293590        -0.03812088

Nope, goes to -1.16 (the intercept), actually significantly different from zero.

Comments (2)

What about the asymptote model?

coef(DC.3)

## (Intercept)   wind.pace 
##    2.978860   -6.934547

As wind.velocity heads to 0, wind.pace heads to +infinity, so DC.output heads to -infinity!
Predicted DC.output crosses 0 approximately when (w is wind.pace and v wind.velocity) 3 - 7w = 0, i.e. w = 3/7 and v = 7/3, and is negative below that: nonsense!
Also need more data for small wind.velocity to understand the relationship. (Is there a lower asymptote?)
Best we can do now is to predict DC.output to be zero for small wind.velocity. This assumes a threshold wind velocity below which no electricity is generated.
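A sketch of that crossover done exactly from the fitted coefficients rather than the rounded 3 and -7 (the names w0 and v0 are just for illustration):

a=coef(DC.3)[1]   # intercept, about 3
b=coef(DC.3)[2]   # slope, about -7
w0=-a/b           # wind.pace where predicted DC.output hits zero
v0=1/w0           # corresponding wind.velocity: about 7/3, i.e. 2.3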

Summary

Often, in data analysis, there is no completely satisfactory conclusion, as here.
Have to settle for a model that works OK, with restrictions.
There is always something else you can try. At some point you have to say "I stop."

Electricity: peak hour demand and total energy usage

Another regression example (SAS)

An electric utility company wants to relate peak-hour demand (kW) to total energy usage (kWh) during a month.
Important planning problem, because the generation system must be large enough to meet maximum demand.
Data from 53 residential customers from August.
Read in the data and draw a scatterplot:

data util;
  infile '/folders/myfolders/utility.txt';
  input usage demand;

proc sgplot;
  scatter x=usage y=demand;

Scatterplot

[Scatterplot of demand against usage.]

Fitting a regression

Concern: outlier at top right (though it appears to be a legitimate value).
Trend basically straight, and the outlier appears to be on it. So try fitting a regression:

proc reg;
  model demand=usage;

Regression output

The REG Procedure
Model: MODEL1
Dependent Variable: demand

Root MSE          1.57720    R-Square  0.7046
Dependent Mean    3.41321    Adj R-Sq  0.6988
Coeff Var        46.20882

Parameter Estimates
                     Parameter     Standard
Variable       DF     Estimate        Error    t Value    Pr > |t|
Intercept       1     -0.83130      0.44161      -1.88      0.0655
usage           1      0.00368   0.00033390      11.03      <.0001

Comments

R-squared 70%: not bad! Statistically significant slope: demand really does depend on usage.
But we should look at the residuals. The SAS way: obtain an output data set, then do something with that:

proc reg;
  model demand=usage;
  output out=util2 r=res p=fit;

The output data set contains the residuals, called res, and the fitted values, called fit (plus all the same output as before).
Now make a scatterplot of residuals against fitted values:

proc sgplot;
  scatter x=fit y=res;

Residual plot

[Scatterplot of res against fit.]

Comments

No trend, but:
- residuals for demand close to 0 are themselves close to zero, and
- residuals for larger demand tend to get farther from zero, at least up to demand 5 or so.
One of the assumptions hiding behind regression is that the residuals should be of equal size, not fanning out as here.
Remedy: a transformation.

But what transformation?

Best way: consult with the person who brought you the data. Can't do that here!
No idea what transformation would be good. Let the data choose: a Box-Cox transformation.
Its scale is that of the "ladder of powers": a power transformation, but 0 is log.
SAS: proc transreg:

proc transreg data=util;
  model boxcox(demand)=identity(usage);
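For comparison, the R analogue of proc transreg's Box-Cox is boxcox from package MASS (we use it later, on the asphalt data); a sketch, assuming the utility data had been read into an R data frame called util:

library(MASS)
boxcox(demand~usage,data=util)   # plots log-likelihood against the power lambda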

Output (graph)

[Box-Cox plot from proc transreg: log-likelihood against the power, with confidence interval marked.]

Comments

SAS finds the best transformation, here power 0.50. It also gives you a CI for the power, here 0.50 to 0.75.
The ideal transformation should be a defensible power. Here that would be power 0.5, which is the square root. Try that and see how it looks.
Create another new data set by bringing in everything from the old one, and make a scatterplot:

data trans;
  set util;
  rtdemand=sqrt(demand);

proc sgplot;
  scatter x=usage y=rtdemand;

New scatterplot

[Scatterplot of rtdemand against usage.]

Regression with new response variable

The scatterplot still looks straight.
Data set trans is the most recently created (default) one, so it is used in the scatterplot above and in proc reg below. We save residuals and fitted values in an output data set, which then becomes the default:

proc reg;
  model rtdemand=usage;
  output out=outrt r=res p=fit;

Output

The REG Procedure
Model: MODEL1
Dependent Variable: rtdemand

Root MSE          0.46404    R-Square  0.6485
Dependent Mean    1.68040    Adj R-Sq  0.6416
Coeff Var        27.61503

Parameter Estimates
                     Parameter     Standard
Variable       DF     Estimate        Error    t Value    Pr > |t|
Intercept       1      0.58223      0.12993       4.48      <.0001
usage           1   0.00095286   0.00009824       9.70      <.0001

Comments

R-squared actually decreased (from 70% to 65%). Slope still strongly significant.
Should take a look at the residuals now:

proc sgplot;
  scatter x=fit y=res;

Residual plot for 2nd regression

[Scatterplot of res against fit for the square-root regression.]

Comments; scatterplot with line on it

Better: no trends, constant variability. One mildly suspicious outlier at the bottom.
Can trust this regression. Better a lower R-squared from a regression we can trust than a higher one from one we cannot.
Make a scatterplot of rtdemand against usage with the regression line on it:

proc sgplot;
  scatter x=usage y=rtdemand;
  reg x=usage y=rtdemand;

Scatterplot with fitted line

[Scatterplot of rtdemand against usage with the fitted regression line.]

Predictions

When we transform the response variable, we have to think carefully about predictions. Using usage=1000:

int=0.58223
slope=0.00095286
pred=int+slope*1000
pred

## [1] 1.53509

It's a prediction, but of the response variable in the regression, which was rtdemand, the square root of demand. To predict actual demand, we need to undo the transformation. Undoing a square root is squaring:

pred^2

## [1] 2.356501

More predictions

For usage 1000, 2000, 3000 all at once:

usage=c(1000,2000,3000)
rt.demand=int+slope*usage
demand=rt.demand^2
demand

## [1]  2.356501  6.189895 11.839173

Transformations are non-linear changes. Here, though the usage values are equally spaced, the predicted demand values are not: there is a larger gap between the 2nd and 3rd than between the 1st and 2nd.

Asphalt

The asphalt data

31 asphalt pavements were prepared under different conditions. How does the quality of the pavement depend on these? Variables:

pct.a.surf   The percentage of asphalt in the surface layer
pct.a.base   The percentage of asphalt in the base layer
fines        The percentage of fines in the surface layer
voids        The percentage of voids in the surface layer
rut.depth    The change in rut depth per million vehicle passes
viscosity    The viscosity of the asphalt
run          2 data collection periods: 1 for run 1, 0 for run 2

rut.depth is the response. It depends on the other variables, but how?
R this time.

Getting set up

Read in the data:

asphalt=read.table("asphalt.txt",header=T)
head(asphalt)

##   pct.a.surf pct.a.base fines voids rut.depth viscosity run
## 1       4.68       4.87   8.4 4.916      6.75       2.8   1
## 2       5.19       4.50   6.5 4.563     13.00       1.4   1
## 3       4.82       4.73   7.9 5.321     14.75       1.4   1
## 4       4.85       4.76   8.3 4.865     12.60       3.3   1
## 5       4.86       4.95   8.4 3.776      8.25       1.7   1
## 6       5.16       4.45   7.4 4.397     10.67       2.9   1

Quantitative variables with one response: multiple regression.
Some issues here that don't come up in simple regression; we'll handle them as we go. (STAB27 ideas.)

Plotting response rut depth against everything else

Same idea as for plotting separate predictions on one plot:

as2=gather(asphalt,xname,x,c(pct.a.surf:voids,viscosity:run))
sample_n(as2,8) # 8 random lines from data frame

##     rut.depth      xname      x
## 2       13.00 pct.a.surf  5.190
## 42       3.58 pct.a.base  4.820
## 40      12.58 pct.a.base  4.840
## 72      20.60      fines  7.700
## 60       0.76 pct.a.base  4.700
## 102     12.58      voids  4.865
## 12       7.00 pct.a.surf  4.610
## 143      1.44  viscosity 50.000

g=ggplot(as2,aes(x=x,y=rut.depth))+geom_point()+
  facet_wrap(~xname,scales="free")

The plot

g

[Faceted scatterplots of rut.depth against each of fines, pct.a.base, pct.a.surf, run, viscosity and voids, each panel on its own scale.]

Interpreting the plots

One plot of rut depth against each of the six other variables, to get a rough idea of what's going on.
Trends mostly weak. Viscosity has a strong but non-linear trend. Run has an effect, but the variability is bigger when run is 1. Weak but downward trend for voids.
The non-linearity of the rut.depth-viscosity relationship should concern us.
Take this back to the asphalt engineer, who suggests using the log of viscosity.

Log of viscosity: more nearly linear?

Create a new variable in the data frame to hold the log of viscosity:

asphalt$log.viscosity=log(asphalt$viscosity)
ggplot(asphalt,aes(y=rut.depth,x=log.viscosity))+
  geom_point()+geom_smooth(se=F)

[Scatterplot of rut.depth against log.viscosity, with smooth trend.]

Comments and next steps

Not very linear, but better than before.
In multiple regression, it's hard to guess which x's affect the response. So typically start by predicting from everything else.
The model formula has the response on the left, a squiggle, and the explanatories on the right joined by plusses:

rut.1=lm(rut.depth~pct.a.surf+pct.a.base+fines+
  voids+log.viscosity+run,data=asphalt)

Regression output

summary(rut.1)

## 
## Call:
## lm(formula = rut.depth ~ pct.a.surf + pct.a.base + fines + voids + 
##     log.viscosity + run, data = asphalt)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.1211 -1.9075 -0.7175  1.6382  9.5947 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)   
## (Intercept)   -12.9937    26.2188  -0.496   0.6247   
## pct.a.surf      3.9706     2.4966   1.590   0.1248   
## pct.a.base      1.2631     3.9703   0.318   0.7531   
## fines           0.1164     1.0124   0.115   0.9094   
## voids           0.5893     1.3244   0.445   0.6604   
## log.viscosity  -3.1515     0.9194  -3.428   0.0022 **
## run            -1.9655     3.6472  -0.539   0.5949   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.324 on 24 degrees of freedom
## Multiple R-squared:  0.806, Adjusted R-squared:  0.7575
## F-statistic: 16.62 on 6 and 24 DF,  p-value: 1.743e-07

Comments

At the bottom: R-squared 81%, not so bad. The F-statistic and P-value assert that something is helping to predict rut.depth.
The table of coefficients says log.viscosity. But this is confused by the clearly non-significant variables: remove those to get a clearer picture of what is helpful.
Before we do anything, look at residual plots:
(a) of residuals against fitted values (as usual);
(b) of residuals against each explanatory.
A problem with (a): fix the response variable; a problem with some plots in (b): fix those explanatory variables.

Add fits and residuals to data frame

asphalt$fit.1=fitted(rut.1)
asphalt$res.1=resid(rut.1)

Construct the plot of residuals against fitted, the normal way:

g=ggplot(asphalt,aes(y=res.1,x=fit.1))+geom_point()+
  geom_smooth(se=F)

Plot residuals against all six x's in one go. Same strategy as before, but noting that we now want log.viscosity instead of viscosity:

as2=gather(asphalt,xname,x,
  c(pct.a.surf:voids,run:log.viscosity))
g2=ggplot(as2,aes(y=res.1,x=x))+geom_point()+
  facet_wrap(~xname,scales="free")

Residuals vs. fitted

g

[Scatterplot of res.1 against fit.1, with smooth trend.]

Residuals vs. x's

g2

[Faceted scatterplots of res.1 against each of fines, log.viscosity, pct.a.base, pct.a.surf, run and voids.]

Comments

There is a serious curve in the plot of residuals vs. fitted values. Suggests a transformation of y.
The residuals-vs-x's plots don't show any serious trends. The worst is probably the potential curve against log.viscosity.
Also, there is a large positive residual, nearly 10, that shows up on all the plots. Perhaps a transformation of y will help with this too.
If the residual-fitted plot is OK, but some residual-x plots are not, try transforming those x's, e.g. by adding x^2 to help with a curve; a sketch of that follows.
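A minimal sketch of that last remedy, not pursued in this analysis: update() adds a squared term for one x without retyping the whole model (rut.1a is a made-up name; log.viscosity is chosen just as an example):

rut.1a=update(rut.1,.~.+I(log.viscosity^2))
summary(rut.1a)   # check whether the squared term helps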

Which transformation?

I have no idea what would be a good transformation. Box-Cox again?

boxcox(rut.depth~pct.a.surf+pct.a.base+fines+voids+
  log.viscosity+run,data=asphalt)

Box-Cox plot

[Plot of log-likelihood against λ from -2 to 2, with 95% confidence interval marked; peak close to 0.]

Comments on Box-Cox plot

The best single choice of transformation parameter λ is the peak of the curve, close to 0.
The vertical dotted lines give a CI for λ, about (-0.05, 0.2).
λ = 0 means log.
The narrowness of the confidence interval means that these are not supported by the data:
- no transformation (λ = 1);
- square root (λ = 0.5);
- reciprocal (λ = -1).
So the new response variable is:

asphalt$log.rut.depth=log(asphalt$rut.depth)

Relationships with explanatories

As before: plot the response (now log.rut.depth) against the other explanatory variables, all in one shot:

as2=gather(asphalt,xname,x,
  c(pct.a.surf:voids,run:log.viscosity))
g3=ggplot(as2,aes(y=log.rut.depth,x=x))+geom_point()+
  facet_wrap(~xname,scales="free")

The new plots

g3

[Faceted scatterplots of log.rut.depth against each of fines, log.viscosity, pct.a.base, pct.a.surf, run and voids.]

Modelling with transformed response

These trends look pretty straight, especially with log.viscosity.
Values of log.rut.depth for each run have the same spread.
The other trends are weak, but straight if they exist.
Start modelling from the beginning again: model log.rut.depth in terms of everything else, see what can be removed, and check the resulting residual plots. Off we go:

rut.2=lm(log.rut.depth~pct.a.surf+pct.a.base+
  fines+voids+log.viscosity+run,data=asphalt)

Output

summary(rut.2)

## 
## Call:
## lm(formula = log.rut.depth ~ pct.a.surf + pct.a.base + fines + 
##     voids + log.viscosity + run, data = asphalt)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.53072 -0.18563 -0.00003  0.20017  0.55079 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -1.57299    2.43617  -0.646    0.525    
## pct.a.surf     0.58358    0.23198   2.516    0.019 *  
## pct.a.base    -0.10337    0.36891  -0.280    0.782    
## fines          0.09775    0.09407   1.039    0.309    
## voids          0.19885    0.12306   1.616    0.119    
## log.viscosity -0.55769    0.08543  -6.528 9.45e-07 ***
## run            0.34005    0.33889   1.003    0.326    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3088 on 24 degrees of freedom
## Multiple R-squared:  0.961, Adjusted R-squared:  0.9512
## F-statistic: 98.47 on 6 and 24 DF,  p-value: 1.059e-15

Taking out everything non-significant

Try: remove everything but pct.a.surf and log.viscosity:

rut.3=lm(log.rut.depth~pct.a.surf+log.viscosity,data=asphalt)

Check that removing all those variables wasn't too much:

anova(rut.3,rut.2)

## Analysis of Variance Table
## 
## Model 1: log.rut.depth ~ pct.a.surf + log.viscosity
## Model 2: log.rut.depth ~ pct.a.surf + pct.a.base + fines + voids + log.viscosity + run
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1     28 2.8809                           
## 2     24 2.2888  4   0.59216 1.5523 0.2191

H0: the two models are equally good; Ha: the bigger model is better. The null is not rejected here: the small model is as good as the big one, so prefer the simpler, smaller model rut.3.

Take out least significant in sequence

summary(rut.2)

## 
## Call:
## lm(formula = log.rut.depth ~ pct.a.surf + pct.a.base + fines + 
##     voids + log.viscosity + run, data = asphalt)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.53072 -0.18563 -0.00003  0.20017  0.55079 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -1.57299    2.43617  -0.646    0.525    
## pct.a.surf     0.58358    0.23198   2.516    0.019 *  
## pct.a.base    -0.10337    0.36891  -0.280    0.782    
## fines          0.09775    0.09407   1.039    0.309    
## voids          0.19885    0.12306   1.616    0.119    
## log.viscosity -0.55769    0.08543  -6.528 9.45e-07 ***
## run            0.34005    0.33889   1.003    0.326    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Take out pct.a.base:

rut.4=lm(log.rut.depth~pct.a.surf+fines+voids+log.viscosity+run,
  data=asphalt)
summary(rut.4)

## 
## Call:
## lm(formula = log.rut.depth ~ pct.a.surf + fines + voids + log.viscosity + 
##     run, data = asphalt)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.51610 -0.18785 -0.02248  0.18364  0.57160 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -2.07850    1.60665  -1.294   0.2076    
## pct.a.surf     0.59299    0.22526   2.632   0.0143 *  
## fines          0.08895    0.08701   1.022   0.3165    
## voids          0.20047    0.12064   1.662   0.1091    
## log.viscosity -0.55249    0.08184  -6.751 4.48e-07 ***
## run            0.35977    0.32533   1.106   0.2793    
## ---

Update

Another way to do the same thing:

rut.4=update(rut.2,.~.-pct.a.base)
summary(rut.4)

## 
## Call:
## lm(formula = log.rut.depth ~ pct.a.surf + fines + voids + log.viscosity + 
##     run, data = asphalt)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.51610 -0.18785 -0.02248  0.18364  0.57160 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -2.07850    1.60665  -1.294   0.2076    
## pct.a.surf     0.59299    0.22526   2.632   0.0143 *  
## fines          0.08895    0.08701   1.022   0.3165    
## voids          0.20047    0.12064   1.662   0.1091    
## log.viscosity -0.55249    0.08184  -6.751 4.48e-07 ***
## run            0.35977    0.32533   1.106   0.2793    

Take out fines:

rut.5=update(rut.4,.~.-fines)
summary(rut.5)

## 
## Call:
## lm(formula = log.rut.depth ~ pct.a.surf + voids + log.viscosity + 
##     run, data = asphalt)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.57275 -0.20080  0.01061  0.17711  0.59774 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -1.25533    1.39147  -0.902   0.3753    
## pct.a.surf     0.54837    0.22118   2.479   0.0200 *  
## voids          0.23188    0.11676   1.986   0.0577 .  
## log.viscosity -0.58039    0.07723  -7.516 5.59e-08 ***
## run            0.29468    0.31931   0.923   0.3646    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Take out run:

rut.6=update(rut.5,.~.-run)
summary(rut.6)

## 
## Call:
## lm(formula = log.rut.depth ~ pct.a.surf + voids + log.viscosity, 
##     data = asphalt)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.53548 -0.20181 -0.01702  0.16748  0.54707 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -1.02079    1.36430  -0.748   0.4608    
## pct.a.surf     0.55547    0.22044   2.520   0.0180 *  
## voids          0.24479    0.11560   2.118   0.0436 *  
## log.viscosity -0.64649    0.02879 -22.458   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3025 on 27 degrees of freedom

Comments

Here we stop: pct.a.surf, voids and log.viscosity would all make the fit significantly worse if removed. So they stay.
Different final result from taking things out one at a time (top) than from taking out 4 at once (bottom):

coef(rut.6)

##   (Intercept)    pct.a.surf         voids log.viscosity 
##    -1.0207945     0.5554686     0.2447934    -0.6464911

coef(rut.3)

##   (Intercept)    pct.a.surf log.viscosity 
##     0.9001389     0.3911481    -0.6185628

I like the one-at-a-time method better as a general strategy. Point: it can make a difference which way we go.

Comments on variable selection

Best way to decide which x's belong: expert knowledge of which of them should be important.
Best automatic method: what we did, backward selection.
Do not learn about stepwise regression! See the notes for a reference.
R has a function step that does backward selection, like this:

step(rut.2,direction="backward")

It gets the same answer as we did (by removing the least significant x).
Removing non-significant x's may remove interesting ones whose P-values happened not to reach 0.05. Consider using a less stringent cutoff like 0.20 or even bigger.
Can also fit all possible regressions, as on the next slide (may need to do install.packages("leaps") first).

All possible regressions

leaps=regsubsets(log.rut.depth~pct.a.surf+pct.a.base+fines+voids+
  log.viscosity+run,data=asphalt,nbest=2)
s=summary(leaps)
data.frame(s$rsq,s$outmat)

##             s.rsq pct.a.surf pct.a.base fines voids log.viscosity run
## 1 ( 1 ) 0.9452562 *
## 1 ( 2 ) 0.8624107 *
## 2 ( 1 ) 0.9508647 * *
## 2 ( 2 ) 0.9479541 * *
## 3 ( 1 ) 0.9578631 * * *
## 3 ( 2 ) 0.9534561 * * *
## 4 ( 1 ) 0.9591996 * * * *
## 4 ( 2 ) 0.9589206 * * * *
## 5 ( 1 ) 0.9608365 * * * * *
## 5 ( 2 ) 0.9593265 * * * * *
## 6 ( 1 ) 0.9609642 * * * * * *

Comments

Problem: even adding a worthless x increases R-squared. So try for the line where R-squared stops increasing by much, e.g. the top line (just log.viscosity), or the first 3-variable line (the backwards-elimination model).
One solution (STAB27): adjusted R-squared, where adding a worthless variable makes it go down.
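Adjusted R-squared applies a penalty for the number of explanatory variables p; a minimal sketch of the formula with n observations (here n = 31 pavements), to show where the next slide's numbers come from:

adj.rsq=function(rsq,n,p) 1-(1-rsq)*(n-1)/(n-p-1)
adj.rsq(0.9452562,n=31,p=1)   # about 0.9434, matching the top line of the next table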

All possible regressions, adjusted R-squared

data.frame(s$adjr2,s$outmat)

##           s.adjr2 pct.a.surf pct.a.base fines voids log.viscosity run
## 1 ( 1 ) 0.9433685 *
## 1 ( 2 ) 0.8576662 *
## 2 ( 1 ) 0.9473550 * *
## 2 ( 2 ) 0.9442365 * *
## 3 ( 1 ) 0.9531812 * * *
## 3 ( 2 ) 0.9482845 * * *
## 4 ( 1 ) 0.9529226 * * * *
## 4 ( 2 ) 0.9526007 * * * *
## 5 ( 1 ) 0.9530038 * * * * *
## 5 ( 2 ) 0.9511918 * * * * *
## 6 ( 1 ) 0.9512052 * * * * * *

The backward-elimination model comes out best again.

Revisiting the best model

Best model was our rut.6:

summary(rut.6)

## 
## Call:
## lm(formula = log.rut.depth ~ pct.a.surf + voids + log.viscosity, 
##     data = asphalt)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.53548 -0.20181 -0.01702  0.16748  0.54707 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   -1.02079    1.36430  -0.748   0.4608    
## pct.a.surf     0.55547    0.22044   2.520   0.0180 *  
## voids          0.24479    0.11560   2.118   0.0436 *  
## log.viscosity -0.64649    0.02879 -22.458   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3025 on 27 degrees of freedom
## Multiple R-squared:  0.9579, Adjusted R-squared:  0.9532
## F-statistic: 204.6 on 3 and 27 DF,  p-value: < 2.2e-16

Revisiting (2)

The regression slopes say that rut depth increases as log.viscosity decreases, pct.a.surf increases and voids increases. This more or less checks out with our scatterplots against log.viscosity.
We should check the residual plots again, though the previous scatterplots say it's unlikely that there will be a problem.
Get residuals and fitted values, and add them to the data frame:

asphalt$res.6=resid(rut.6)
asphalt$fit.6=fitted(rut.6)

Plot against each other:

g=ggplot(asphalt,aes(y=res.6,x=fit.6))+geom_point()

Residuals against fitted values

g

[Scatterplot of res.6 against fit.6.]

Plotting residuals against x's

Do our trick again to put them all on one plot:

as2=gather(asphalt,xname,x,
  c(pct.a.surf:voids,run:log.viscosity))
g2=ggplot(as2,aes(y=res.6,x=x))+geom_point()+
  facet_wrap(~xname,scales="free")

Residuals against the x's

g2

[Faceted scatterplots of res.6 against each of fines, log.viscosity, pct.a.base, pct.a.surf, run and voids.]

Comments

None of the plots shows any sort of pattern; the points all look random on each plot.
On the plot of fitted values (and on the one of log.viscosity), the points seem to form a left half and a right half with a gap in the middle. This is not a concern.
One of the pct.a.surf values is a low outlier (about 4), which shows up at the top left of that plot.
Only two possible values of run; the points in each group look randomly scattered around 0.
The residuals seem to go further above zero than below, suggesting mild non-normality, but not enough to be a problem.

Variable-selection strategies

- Expert knowledge.
- Backward elimination.
- All possible regressions.
- Taking a variety of models to experts and asking their opinion.
- Using a looser cutoff to eliminate variables in backward elimination (e.g. only if P-value greater than 0.20).
If the goal is prediction, eliminating worthless variables is less important. If the goal is understanding, you want to eliminate worthless variables where possible.
Results of variable selection are not always reproducible, so caution is advised.