Some Procedures in SPSS, Part (2)

This handout describes some further procedures in SPSS, following on from Part (1). Because some of the procedures covered are complex, with many sub-commands, the descriptions are more than ever not exhaustive, but they attempt to cover the options and variants used most often in the School of Behavioural Sciences. The following procedures and commands are covered in this part:

Command                Purpose
t-test                 Performs independent and paired t-tests
oneway                 Carries out one-way analysis of variance
manova                 Allows analysis of a variety of linear models
regression             Performs regression analysis
logistic regression    Does logistic regression analysis (not available yet)

The dataset used in many of the examples is based on the INTRO.DAT file used in DIYS; in fact, it's the system file saved towards the end of DIYS, which includes various computed and recoded variables. A listing of the data is included in the appendix to Part (1) of this handout.

t-test

The t-test for independent groups can be carried out with the following command:

t-test groups=sex(1,2) vars=pretest.

As well as showing the results for the actual t-test, the output below reports an F-value for a test of the homogeneity of the variances of the two groups in terms of the dependent variable (pretest). In this case, there is no suggestion that the F-test is significant (p = .835), so we would not hesitate to use the t-test results based on the Pooled Variance Estimate. This gives a result equivalent to carrying out a one-way analysis of variance. If the variances of the two groups differ markedly, a t-test based on Separate Variance Estimates may be preferable. For this form of the test, the degrees of freedom are adjusted in accordance with the extent of the difference between the variances, and may not be an integer value.
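The logic of the two variance estimates can be sketched in Python with scipy. The scores below are invented stand-ins for the PRETEST values of the two sex groups (they are not the INTRO.DAT data), and scipy's Levene test is used as a stand-in for the homogeneity-of-variance F-test SPSS prints:

```python
# A sketch of the pooled- vs separate-variance t-test, on made-up data.
from scipy import stats

males = [10.5, 12.0, 9.5, 11.0, 13.5, 10.0, 12.5]
females = [11.5, 13.0, 12.0, 14.5, 12.5, 13.5, 11.0]

# Pooled variance estimate (equal_var=True)
t_pooled, p_pooled = stats.ttest_ind(males, females, equal_var=True)

# Separate variance estimates (Welch's test): the degrees of freedom are
# adjusted for unequal variances and need not be an integer
t_sep, p_sep = stats.ttest_ind(males, females, equal_var=False)

# A homogeneity-of-variance check (Levene's test, standing in for SPSS's F)
lev_stat, lev_p = stats.levene(males, females)

# The pooled t-test is equivalent to a one-way analysis of variance: F = t^2
F_anova, p_anova = stats.f_oneway(males, females)
```

Comparing `F_anova` with `t_pooled**2` confirms the equivalence between the pooled t-test and a two-group one-way analysis of variance.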
t-tests for independent samples of SEX
GROUP 1 - SEX EQ 1: Male
GROUP 2 - SEX EQ 2: Female

[SPSS output: the number of cases, mean, standard deviation and standard error of PRETEST for each group; the F-value and 2-tail probability for the homogeneity-of-variance test; and the t-value, degrees of freedom and 2-tail probability under both the Pooled and the Separate Variance Estimates. The numeric values were not preserved.]

Although only one grouping variable can be named after the groups= command, multiple dependent variables can be given in the vars= command. A further point is that the codes given in the groups= command can be used to identify two groups from a larger set. For example, the difference in pretest scores between groups 1 and 3 in the present dataset (ignoring the data for groups 2, 4 and 5) could be tested with

t-test groups=group(1,3) vars=pretest.

The t-test for correlated or paired variables can be carried out with this command:

t-test pairs=pretest post delay.

Measures which should be treated as correlated can arise from repeated measures on the same subjects, as in this example, or from paired cases such as husband and wife, or mother and child. In the above form, the t-test command calls for a comparison between each pair of variables (i.e., three comparisons). The results of the first are shown below.

t-tests for paired samples
[SPSS output: the number of pairs, the 2-tail correlation and its significance, and the mean, SD and SE of PRETEST and POST, followed by the paired differences (mean, SD, SE of mean), the t-value, df, 2-tail significance and the 95% confidence interval (-.575, .808). The remaining numeric values were not preserved.]

Along with the results of the t-test, the procedure prints out the correlation between the variables. One nice feature of t-test is that tests between a series of pairs of variables can be specified economically and in a way which avoids the calculation of irrelevant tests. For example, say we had before and after measurements on three variables, A, B and C, namely A1, A2, B1, B2, C1 and C2. The command

t-test pairs=A1 B1 C1 with A2 B2 C2 (pairs)

would do the three tests needed. Finally, a one-sample t-test can be performed by creating a new variable with a constant value and entering it as one of the variables in the test.
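The equivalence between the paired t-test and a one-sample test on the differences (the "constant variable" trick described below) can be sketched with scipy; the pretest and posttest scores here are invented:

```python
# Paired t-test vs the one-sample test on difference scores, on made-up data.
from scipy import stats

pretest = [12.0, 10.5, 14.0, 11.0, 13.5, 12.5]
post = [13.0, 11.0, 13.5, 12.5, 15.0, 13.0]

# Direct paired t-test
t_paired, p_paired = stats.ttest_rel(post, pretest)

# Equivalent one-sample test: compute the differences and test them against
# zero, just as the compute diff / compute zero trick does in SPSS
diff = [b - a for a, b in zip(pretest, post)]
t_one, p_one = stats.ttest_1samp(diff, 0.0)
```

The two tests give identical t-values and probabilities, which is exactly why the SPSS trick works.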
For example, an alternative way of doing the test reported above would be to calculate the difference between the pretest and posttest scores with compute diff=post - pretest, create a variable with a value of zero with compute zero=0, and then use t-test pairs=zero diff. (It's worth noting that this is the way Minitab handles paired t-tests.) Another application of the one-sample test is to compare the results of your subjects on a standard test with a score which has been derived from a large standardization sample and which can be regarded as a population parameter. In this case, the compute command would be used to create a variable equal to the population value, which would then be entered into the paired t-test along with the variable representing your subjects' scores.

Oneway

The main virtue of the oneway procedure is that it allows a priori and post hoc tests to be carried out for one-way analyses of variance. These features will be illustrated with

the full (n=70) version of the dataset from which intro.dat was drawn. The dependent variable is a difference score, so in effect we will be studying a group by time interaction, but of course everything said applies equally to simple dependent variables (i.e., those which aren't difference scores). A number of dependent variables can be specified at once, but in our example we'll use just one:

oneway diff1 by group (1,5)/
 statistics=descriptives/
 contrast=-1,-1,-1,1.5,1.5/
 contrast=0,0,0,-1,1/
 ranges=lsdmod.

The numbers in brackets after the grouping variable (group) give the codes for the lowest and highest groups to be included in the analysis. Unlike the manova procedure, described later, oneway doesn't object if some of the codes specified or implied don't actually occur. As can be seen, the procedure first prints out the basic analysis of variance results, followed by the means and standard deviations, etc., as requested with statistics=descriptives.

[SPSS oneway output for DIFF1 by GROUP: the analysis of variance table (sums of squares, mean squares, F-ratio and F probability for between groups, within groups and total), followed by the count, mean, standard deviation, standard error, 95% confidence interval for the mean, minimum and maximum for each group (Role play, Discussion, RP + Discussion, Contact, No-Contact) and for the total. The numeric values were not preserved.]

Next, the contrast coefficients for the a priori contrasts are shown, followed by the results of t-tests of the contrasts, using both pooled and separate variance estimates (see the comments on the t-test procedure given earlier). In this example, the first contrast compares the three experimental groups with the two control groups, while the second ignores the experimental groups and compares the two control groups. With two a priori

contrasts, the analyst could use a standard significance level (e.g., .05) for each, or might feel moved to maintain an overall level (e.g., .05) by applying the Bonferroni adjustment, so that the level for each would be .05/2 = .025.

[SPSS oneway output for DIFF1 by GROUP: the contrast coefficient matrix, followed by the value, standard error, t-value, degrees of freedom and 2-tail probability of each contrast under both the pooled and the separate variance estimates. The numeric values were not preserved.]

The final part of the output gives the results of post hoc tests, in this case using the Bonferroni adjustment, produced by ranges=lsdmod. The method used by oneway to calculate the critical difference is equivalent to using the equation

critical difference = t(α/2c) × √(2 × MSE / n)

where, in this case, α = .05; c, the number of pairwise comparisons, is 10 [k(k - 1)/2, where k is the number of groups]; MSE is the mean square error from the analysis of variance table; and n is 14, the number of cases in each group. The critical t-value for a probability of .05/(2 × 10) = .0025 with 65 degrees of freedom (again from the analysis of variance table) is 2.906, so the critical difference is equal to 2.906 × √(2 × MSE / 14). SPSS provides a map

[SPSS oneway output for DIFF1 by GROUP: Multiple Range Tests, Modified LSD (Bonferroni) test with significance level .05. The difference between two means is significant if MEAN(J)-MEAN(I) >= [value] * RANGE * SQRT(1/N(I) + 1/N(J)), with RANGE = 4.11. A triangular map marks significant pairwise differences with (*), and homogeneous subsets (sets of groups whose highest and lowest means are not significantly different) are listed: Subset 1 contains Contact, No-Cont., Discussi and RP + Dis; Subset 2 contains Discussi, RP + Dis and Role pla.]

showing which pairs of means are different and also a listing of subsets of groups whose means are not significantly different. In the event of conflict between the two forms of results, take the homogeneous subset listing to be correct (Toothaker, 1991). In this example, the role-play group is significantly different from the two control groups; there are no other differences. When a relatively large number of comparisons is involved, the lsdmod option becomes conservative. You may wish to try two of the other options, tukey and tukeyb, although it's worth noting that, in this case, these two tests (output not shown) produced the same results as lsdmod. For further information on multiple comparisons, see the book by Toothaker mentioned above.

Manova

The manova procedure is a world of its own, and many types of analysis are possible, each with a large array of options, so that a book would be required to do the procedure justice. Anything resembling a full coverage is not possible, but an attempt will be made to cover the main capabilities and the options which are most often used in the Department of Psychology.
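The Bonferroni critical-difference arithmetic described above can be sketched in Python. The alpha, group size, number of groups and error degrees of freedom are the ones quoted in the text; the MSE value from the ANOVA table was not preserved, so a made-up figure is used purely to show the calculation:

```python
# The modified LSD (Bonferroni) critical difference: t(alpha/2c) * sqrt(2*MSE/n).
from scipy import stats

alpha, k, n, df_error = 0.05, 5, 14, 65
c = k * (k - 1) // 2                      # number of pairwise comparisons (10)
t_crit = stats.t.ppf(1 - alpha / (2 * c), df_error)

mse = 9.0                                 # hypothetical mean square error
critical_difference = t_crit * (2 * mse / n) ** 0.5
```

With these inputs, `t_crit` reproduces the 2.906 quoted in the text; substituting the real MSE would give the critical difference SPSS reports.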

Univariate analysis of variance

Although the name of the procedure suggests, quite rightly, that manova can carry out analyses with more than one dependent variable at a time (multivariate analysis of variance), it can also be used for univariate analyses, so that it pretty much makes the anova procedure redundant (anova isn't covered here, for that reason). A basic two-way analysis of variance could be carried out with the following:

manova diff1 by sex (1,2) group (1,5)/
 method=sstype(sequential)/
 print=cellinfo(means)/
 design.

Again, the single dependent variable is a difference score, but this wouldn't always be the case. Some points to note:

1. No matter how many categorical classification (or predictor) variables there are (sex and group in this case), by occurs only once.

2. The numbers in the brackets after each classifying variable show the lowest, then the highest, values that the variable takes. Unlike oneway, manova objects if some of the implied values don't occur. For example, if for some reason there were no cases for which group was equal to 3, SPSS would print an error message pointing out that this category was empty, and stop. You may need to use the SPSS recode command to ensure that all values between the highest and lowest values of a classification variable occur.

3. The method sub-command is used here to request a sequential partitioning of the sums of squares. In this case, where there are equal numbers of males and females in each group, this is not too important, but when the design is non-orthogonal, the results obtained with the sequential option will differ from those obtained with the alternative, unique partitioning. This will be discussed later. When in doubt, use the sequential option. (Note that if you are using SPSS for Windows, the form of the sub-command is simply method=sequential.)

4. The design sub-command is here given without any further information.
Manova will carry out the default analysis, which is a full factorial model including the terms sex, group and the sex by group interaction. The manova output contains the cell means, as requested, and a conventional analysis of variance table.

* * * * * * A n a l y s i s   o f   V a r i a n c e * * * * * *

 70 cases accepted.
  0 cases rejected because of out-of-range factor values.
  0 cases rejected because of missing data.
 10 non-empty cells.
  1 design will be processed.

[SPSS output: cell means and standard deviations of DIFF1, giving the mean, standard deviation and N for each GROUP (Role play, Discussion, RP + Discussion, Contact, No-Contact) within each level of SEX (female, male), and for the entire sample. The numeric values were not preserved.]

* * * * * * A n a l y s i s   o f   V a r i a n c e -- design 1 * * * * * *

[Tests of Significance for DIFF1 using SEQUENTIAL Sums of Squares: SS, DF, MS, F and significance of F for WITHIN CELLS, SEX, GROUP, SEX BY GROUP, the model and the total. R-Squared = .305; Adjusted R-Squared = .200.]

A More Complicated Analysis of Variance

The basic commands used above will now be augmented with some sub-commands and options which will give us some more information which might be useful, and will illustrate flexibilities in specifying the design. Note that the next example crowds a number of options and sub-commands together for illustrative purposes: in most cases they don't have to be used together, and you might never need to carry out an analysis which looks like the following.

manova diff1 by sex (1,2) group (1,5)/
 contrast(sex)=simple(1)/
 contrast(group)=special ( 1  1  1  1  1
                          -2 -2 -2  3  3
                           0  0  0 -1  1
                          -1  0  1  0  0
                           0 -1  1  0  0)/
 partition(group)/
 method=sstype(unique)/
 print=param(estim) signif(efsize)/
 design=constant sex group/
 design=constant group(1) group(2) group(3) group(4).

* * * * * * A n a l y s i s   o f   V a r i a n c e -- design 1 * * * * * *

[Tests of Significance for DIFF1 using UNIQUE sums of squares: SS, DF, MS, F and significance of F for WITHIN+RESIDUAL, CONSTANT, SEX and GROUP, the corrected model and the corrected total. R-Squared = .230; Adjusted R-Squared = .170. Effect size measures: partial eta squared = .032 for SEX. Parameter estimates with individual univariate 95% confidence intervals are printed for CONSTANT, SEX and GROUP.]

* * * * * * A n a l y s i s   o f   V a r i a n c e -- design 2 * * * * * *

[Tests of Significance for DIFF1 using UNIQUE sums of squares: SS, DF, MS, F and significance of F for WITHIN+RESIDUAL, CONSTANT and the four single-degree-of-freedom effects GROUP(1) to GROUP(4), the corrected model and the corrected total. R-Squared = .205; Adjusted R-Squared = .156.]

[Effect size measures for design 2: partial eta squared = .133 for GROUP(1), .004 for GROUP(2), .016 for GROUP(3); the value for GROUP(4) was not preserved. Parameter estimates with individual univariate 95% confidence intervals are printed for CONSTANT and GROUP(1) to GROUP(4).]

Contrasts

The two sub-commands following the first line of the manova command ask manova to set up particular contrasts, or comparisons between categories in terms of the dependent variable. If you do not include a contrast sub-command, manova uses what are called deviation contrasts. These lead to regression coefficients which show the difference between the mean for each category (except the last) of a classification variable (sex and group in this case) and the grand mean. For example, the means for groups 1 to 5 in the present study are 5.21, 1.82, 3.86, .57 and 1.25, giving a grand mean of 2.54. The first of the contrasts used by manova if left to its own devices would be

[5.21 × (1 - 1/5)] - [1.82 × 1/5] - [3.86 × 1/5] - [.57 × 1/5] - [1.25 × 1/5]

= 2.67.

The value 2.67 is therefore the difference between the grand mean (2.54) and the mean of the first category (5.21). The divisor, 5 in this case, is equal to the number of levels in the factor. While deviation contrasts are perfectly acceptable, other contrasts may sometimes be desirable. In this example, for the sex factor we ask for a simple contrast between male and female, using the first category, females in this case, as the reference category. Manova uses the coefficients -1 for females and 1 for males, so that a positive regression weight will indicate that males have a higher mean diff1 score than females. In fact, the results above show that the relevant regression coefficient is -1.23, meaning that on average females score 1.23 higher than males, although the difference is not significant (p = .148). If a reference category is not specified for simple contrasts, manova uses the last category.

The contrast specified for group is a little more ambitious, in that we don't ask for one of the "stock" ones supplied by manova (which include, for example, deviation, simple, repeated, helmert, difference and polynomial, the last three being orthogonal, an important point when we deal with repeated measures designs), but use the special keyword and introduce our own contrasts. The first line of ones simply creates the constant. The first contrast (-2,-2,-2,3,3) compares the three experimental groups with the two control groups, the second compares the two control groups, the third compares the Role Play group with the Role Play + Discussion group and the fourth compares the Discussion group with the Role Play + Discussion group. The first, second and third contrasts are orthogonal to (or independent of) each other, in that they meet two criteria: (1) the coefficients for a given contrast sum to zero, and (2) the products of corresponding coefficients for any pair of contrasts also sum to zero.
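The deviation-contrast arithmetic given above can be checked in a few lines of Python, using the group means quoted in the text:

```python
# Deviation contrast for the first group: the weighted combination above
# equals the first group's mean minus the grand mean.
means = [5.21, 1.82, 3.86, 0.57, 1.25]
grand_mean = sum(means) / len(means)
first_deviation = means[0] * (1 - 1/5) - sum(m * (1/5) for m in means[1:])
```

The identity holds in general: the deviation-contrast weights reduce algebraically to "category mean minus grand mean".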
For example, using the first and third contrasts: (-2 × -1) + (-2 × 0) + (-2 × 1) + (3 × 0) + (3 × 0) = 0. The fourth contrast is not orthogonal to the others (you may like to verify this using the above rules). A good discussion of the benefits of orthogonal contrasts can be found in Harris (1994).

The results for individual contrasts can be seen in two ways. One way is to request the regression parameters by including param(estim) in the print sub-command. In the present output, for example, the first estimate for group suggests that the combined mean for the experimental groups is 16.3 higher than the combined mean for the two control groups. The t-value for the parameter shows that the difference is significant at the .002 level. Notice that the word constant is included in the design statement; by default manova always fits a constant, but its value is not reported with the other parameter estimates unless constant is included in the design statement.

Another way of examining the results for individual contrasts is to partition the overall effect and test the significance of each subset. In this example we used the partition(group) sub-command without giving any extra information, which asks that single degree-of-freedom effects be generated. Once partitions have been created, it is possible to refer to each with names of the form effectname(n). In this case we have group(1), which refers to the -2,-2,-2,3,3 contrast, group(2), which refers to the 0,0,0,-1,1 contrast, etc. As can be seen, each contrast is referred to separately in the design sub-command (about which more later), and individual results are given in the analysis of variance table. These correspond exactly to those given by the parameter estimates. For example, the F-ratio for the first contrast, 9.93, is equal to the square of the t-value for the regression weight for the same contrast.
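The two orthogonality criteria can be verified mechanically. The first two rows below are the contrasts given explicitly in the text; the third and fourth follow the verbal descriptions (role play vs role play + discussion, and discussion vs role play + discussion), so their exact coefficients are a reconstruction:

```python
# Checking sum-to-zero and pairwise cross-products for the special contrasts.
import numpy as np

contrasts = np.array([
    [-2, -2, -2,  3,  3],   # 1: experimental groups vs control groups
    [ 0,  0,  0, -1,  1],   # 2: contact vs no-contact controls
    [-1,  0,  1,  0,  0],   # 3: role play vs role play + discussion
    [ 0, -1,  1,  0,  0],   # 4: discussion vs role play + discussion
])

sums = contrasts.sum(axis=1)                 # criterion 1: each row sums to 0
dot_13 = int(contrasts[0] @ contrasts[2])    # first vs third: orthogonal
dot_34 = int(contrasts[2] @ contrasts[3])    # third vs fourth: not orthogonal
```

The cross-product of the third and fourth rows is 1, not 0, which is why the fourth contrast is not orthogonal to the others.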

The Method Used to Calculate the Sums of Squares

In this example, the method= sub-command specifies the unique method for partitioning the sums of squares. In this case, because the design is orthogonal (the number of subjects is the same in each of the 2 x 5 cells), the results obtained for a unique partition (sometimes called the regression approach) do not differ from those obtained for a sequential partition (sometimes called hierarchical decomposition). However, when designs are not orthogonal, and the terms in the model are correlated, the results obtained for the two methods will almost certainly differ, sometimes quite markedly. Under both options the effects of variables, factors or terms in the model may be considered with other terms in the model "adjusted for" or "held constant"; the difference between the options lies in exactly which terms are adjusted for or held constant.

First, though, what does it mean to adjust for a variable, or hold it constant? One way of thinking about this is in terms of residuals. Say we have two variables A and B. If B is regressed on A alone (i.e., B is treated as the dependent variable and A as the predictor), the regression model can be used to predict a value of B for each case, giving rise to a predicted value B'. The difference between the observed value (B) and the predicted ("expected") value (B') for each case is called the residual, B - B'. If a dependent variable Y is then regressed on B - B', we can say that the effect of B on Y is being examined with A held constant; that is, the part of B which can be predicted by A has been removed, and what is left is that part of B which cannot be predicted from A. If the variables A and B are independent and therefore uncorrelated, A will not predict any part of B, and "holding A constant" will have no effect on the relationship between B and Y. We return now to the unique and sequential options.
With the unique option, the effect of each term in the model is examined with all other terms held constant. For example, in our example with sex and group, the effect of sex is examined with group held constant, while the effect of group is examined with sex held constant. If the model included the sex by group interaction term, the other effects would each be examined with sex by group held constant as well, and of course the interaction term would be examined with both sex and group adjusted for. (It is clear why the unique option is sometimes called the regression approach: the effects of variables in a regression equation, as represented by the regression weights, are always examined with all other variables in the model held constant.)

With the sequential option, on the other hand, each term in the model is examined with only the variables preceding it held constant. Thus in the sequential option, the order in which the variables are entered becomes important. If variables are specified in the design statement, as they are in the last example (sex then group), they are entered in the order given on the statement. So, in the example, the effect of sex would be examined with no other variable adjusted for, while the effect of group would be examined with sex held constant. (If design is given without naming any factors, they are entered in the order given on the manova statement.)

As can be appreciated, very different results might be obtained for sex under the two options with a non-orthogonal design. It might be found to be strongly related to diff1 if entered first in a sequential analysis, but prove to be non-significant in a unique analysis, or if entered after group in a sequential analysis.
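The "holding constant" idea that underlies both options can be sketched numerically: regress B on A, keep the residuals B - B', and note that the residuals carry no information about A. The data here are simulated purely for illustration:

```python
# Residualizing B on A: the residuals are uncorrelated with A,
# which is what "examining B with A held constant" means.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 0.6 * a + rng.normal(size=200)      # B is partly predictable from A

slope, intercept = np.polyfit(a, b, 1)  # regress B on A
b_resid = b - (slope * a + intercept)   # B - B': the part of B not predicted by A

r = np.corrcoef(a, b_resid)[0, 1]       # correlation of A with the residuals
```

Because least-squares residuals are orthogonal to the predictor, `r` is zero up to rounding error, so any relationship between `b_resid` and a dependent variable Y is a relationship with A held constant.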
It can be seen that having a clear idea of what questions we want answered, which can be translated into decisions about the order in which variables should be entered, is important if we are not to become confused by the different results which can be obtained.

Part and Partial Correlation and Effect Size

In the example described above, the effects of A were removed from B, and the relationship between a dependent variable Y and B - B' was examined. The correlation between the residuals B - B' and Y is called the part or semi-partial correlation between Y

and B. The squared part-correlation can be obtained from the sums of squares printed out by the manova procedure. To illustrate this, we will consider a dataset (one which actually comes with SPSS for Windows, called bank.sav) which contains three correlated variables, beginning salary (salbeg), gender (sex) and education (edlevel). The following are analysis of variance tables for manovas with salbeg as the dependent variable and sex and edlevel as predictors. The first two tables (designs 1 and 2) give the results of sequential analyses with the two predictors entered in different orders. Here we are interested mainly in the sums of squares (SS) column.

In the first table (design 1) sex is examined with edlevel held constant. The squared part-correlation between sex and salbeg is equal to the SS for sex under these circumstances divided by the total SS, which comes to .061 (making the actual part correlation SQRT(.061) = .247). The value of .061 (or 6.1%) can be compared with the value of .209 or 20.9% obtained when sex is entered first in a sequential analysis (design 2) and is therefore not adjusted for any other variable. The square root of .209, .457, is the value we would obtain if we simply asked SPSS to calculate the Pearson correlation between sex and salbeg. Notice that the squared part-correlation is also obtainable from the results of the unique analysis (design 3).
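The claim that the squared part-correlation equals the sequential SS for a term divided by the total SS can be demonstrated on simulated data (the variables below only mimic the structure of bank.sav; they are not its values):

```python
# Squared semi-partial correlation two ways: from sequential sums of squares,
# and from the residual definition. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
sex = rng.integers(0, 2, 200).astype(float)
edlevel = rng.normal(12.0, 3.0, 200) + 2.0 * sex
salbeg = 100.0 + 40.0 * edlevel + 60.0 * sex + rng.normal(0.0, 50.0, 200)

def rss(y, predictors):
    """Residual sum of squares from a least-squares fit with a constant."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ss_total = np.sum((salbeg - salbeg.mean()) ** 2)
ss_sex_after_ed = rss(salbeg, [edlevel]) - rss(salbeg, [edlevel, sex])
r2_part = ss_sex_after_ed / ss_total     # squared semi-partial correlation

# Cross-check: correlate salbeg with the part of sex not predicted by edlevel
slope, intercept = np.polyfit(edlevel, sex, 1)
sex_resid = sex - (slope * edlevel + intercept)
r_part = np.corrcoef(salbeg, sex_resid)[0, 1]
```

The SS-ratio and the residual-based correlation agree exactly, which is the identity the text uses to read part-correlations off the manova tables.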
* * * * * * A n a l y s i s   o f   V a r i a n c e -- design 1 * * * * * *

[Tests of Significance for SALBEG using SEQUENTIAL Sums of Squares, with EDLEVEL entered before SEX: SS, DF, MS, F and significance of F for WITHIN+RESIDUAL, EDLEVEL, SEX, the model and the total. R-Squared = .462; Adjusted R-Squared = .460. Effect size measures (partial eta squared, eta squared and omega squared) are printed for EDLEVEL and SEX.]

* * * * * * A n a l y s i s   o f   V a r i a n c e -- design 2 * * * * * *

[The same sequential analysis with SEX entered before EDLEVEL. R-Squared = .462; Adjusted R-Squared = .460, with the corresponding effect size measures.]

* * * * * * A n a l y s i s   o f   V a r i a n c e -- design 3 * * * * * *

[Tests of Significance for SALBEG using UNIQUE sums of squares: SS, DF, MS, F and significance of F for WITHIN+RESIDUAL, SEX and EDLEVEL, the model and the total. R-Squared = .462; Adjusted R-Squared = .460. Effect size measures: partial eta squared = .102 for SEX and .320 for EDLEVEL.]

Having dealt with part- or semi-partial correlation, we can now go the whole hog and talk about another way of assessing the effects of variables in a model, partial correlation. The basis for partial correlation can again be thought of in terms of residuals. In our example of part-correlation the relationship assessed was that between a dependent variable Y and a predictor B with the effects of A removed from B; in other words, between Y and B - B'. Partial correlation measures the relationship between B - B' and Y - Y'; in other words, between B from which the effects of A have been removed and Y from which the effects of A have also been removed. To paraphrase Cohen and Cohen (1975), the squared partial correlation in this case measures the proportion of variance of Y not associated with A which is associated with B.

Once again the partial correlation can be estimated from the sums of squares (SS) printed out by manova. For example, the partial r-squared for sex in design 1 is equal to SS(sex)/(SS(total) - SS(edlevel)), which comes to .102. This makes the partial correlation equal to SQRT(.102) = .320. Because the formula for partial correlation is like that for semi-partial correlation except that the denominator is smaller, the partial correlation will always be larger than the semi-partial correlation. Another way of thinking about this in terms of our example is that when we deal with Y - Y' rather than Y, we have removed a part of Y which is not related to B, so we would expect the relationship between B - B' and what remains of Y to be, if anything, stronger than that between B - B' and the whole of Y.
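The relationship between the two coefficients can be demonstrated directly from the residual definitions, again on simulated data:

```python
# Partial vs semi-partial correlation via residuals: the partial removes A
# from both Y and B, the semi-partial only from B.
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=300)
b = 0.5 * a + rng.normal(size=300)
y = 0.4 * a + 0.4 * b + rng.normal(size=300)

def residuals(dep, pred):
    slope, intercept = np.polyfit(pred, dep, 1)
    return dep - (slope * pred + intercept)

b_res = residuals(b, a)
y_res = residuals(y, a)

semipartial = np.corrcoef(y, b_res)[0, 1]   # Y with B adjusted for A
partial = np.corrcoef(y_res, b_res)[0, 1]   # Y and B both adjusted for A
```

Because residualizing Y on A can only shrink the denominator, the partial correlation is never smaller in magnitude than the semi-partial, just as the text argues.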
Having gone laboriously through the calculation of part and partial correlations using the sums of squares, we can now reveal that SPSS will do some of the calculations for you if you specify the sub-command print=signif(efsize). This was used when the above output was produced, and also in the previous example based on the DIYS data, so we are now able to see exactly what SPSS has produced. With design 1, the coefficients in the Partial ETA Sqd column are in fact squared partial correlation coefficients, obtained from a formula of the type SS(A)/(SS(total) - SS(B)) as described above. The results in the ETA Sqd column are squared part or semi-partial correlations, again calculated as described above. The differences in the values given for the two variables in designs 1, 2 and 3 are simply due to the different orders of the variables and, in design 3, the different method used to partition the sums of squares.

The results in the Omega Squared column are estimates of the association between the predictor variables and the dependent variable in the population. Hays (1963) argues that omega squared is preferable to eta squared for this reason and also because eta squared is more appropriately applied to continuous variables than to categorical factors. Hays sees eta squared as a descriptive statistic for the sample. An estimate of omega squared is calculated with the following formula:

omega² = [SS(effect) - df(effect) × MS(error)] / [SS(total) + MS(error)]

For edlevel in design 1, for example, the value is obtained by substituting the SS for edlevel, its degrees of freedom, the error mean square and the total SS from the design 1 table.

Note that SPSS does not provide ETA Sqd or Omega Squared when the unique option is specified (design 3). Also, some versions of SPSS, including those currently on Hardy and Laurel, do not give the total sum of squares or R-squared with the unique option. The reason for this is that the partial sums of squares calculated with that option do not add up to the total sum of squares (as can be verified from the results for design 3). However, as long as this is understood, it is useful to have R-squared and the total SS in order to carry out some of the calculations described above without having to run a separate sequential analysis.

Using Manova for Regression Analyses: Specifying Continuous Variables as Predictors

Although manova is often used for analyses in which the independent variables or predictors are categorical, they do not have to be, and the procedure can readily be used to carry out regression analyses involving only continuous variables or a mixture of the two types. In some ways, manova is superior to the sometimes rather obscure regression procedure, and provides useful output (such as the sequential sums of squares) which regression does not. It is also sufficiently similar to Minitab's regression to make users of that excellent procedure feel at home. The following commands request a regression analysis with two predictor variables:

recode sex (2=0).
manova occ1 sex iq/
 analysis=occ1/
 method=sequential/
 print=signif(efsize) param(estim)/
 design=constant sex iq.

Note that all three variables are entered directly after the manova command, and that there is no by statement. When variables are entered as above, manova regards them all as dependent variables until one (for a univariate analysis) or more is picked out with the analysis sub-command, occ1 in this case.
The remaining variables are then free to be treated as independent or predictor variables in the design statement. In this example we are attempting to predict a pretest score (occ1) with sex and iq. The iq variable is continuous, but of course sex is dichotomous, and has been recoded so that females (originally coded 2) become 0, with males remaining 1. This recoding isn't necessary, but a binary variable with zero as one of the values has desirable properties when it comes to evaluating the regression equation (mainly that when the value is zero, terms in which the variable is involved simply disappear from the equation). Of course we could have entered sex after a by statement, as in the first example above; the advantage of entering it as in the present example is that we have complete control over the way it is coded. The output is shown below.

Tests of Significance for OCC1 using SEQUENTIAL Sums of Squares

[SS, DF, MS, F and significance of F for WITHIN+RESIDUAL, CONSTANT, SEX and IQ, the corrected model and the corrected total. R-Squared = .269; Adjusted R-Squared = .247. Effect size measures (partial eta squared, eta squared and omega squared) and parameter estimates with individual univariate 95% confidence intervals are printed for SEX and IQ.]

This is an interesting finding: the analysis of variance table shows that, in a sequential analysis, iq is highly significantly related to occ1 (almost 25% of the variance of occ1 in the sample is accounted for by iq), but that sex, entered first, is not. However, the regression coefficients, equivalent to a unique analysis, tell a slightly different story: when iq is held constant, sex is significantly related to occ1. In the light of this result, a further analysis is called for. It is worth looking at whether sex and iq interact (i.e., whether the relationship between iq and occ1 is different for males and females) and also using the unique option. The commands below are used to create an interaction term, which is included in the list of variables in the manova statement, and then to carry out a modified analysis. Note that interaction terms have to be created in this way only when neither of the variables involved comes after a by statement.

compute sxiq=sex*iq.
manova occ1 sex iq sxiq/
 analysis=occ1/
 method=unique/
 print=param(estim)/
 design=constant sex iq sxiq/
 design=constant sex iq.

* * * * * * A n a l y s i s o f V a r i a n c e -- design 1 * * * * * *

[Order of variables for analysis: OCC1 as the dependent variable, 0 covariates]

Tests of Significance for OCC1 using UNIQUE sums of squares

[ANOVA table: WITHIN+RESIDUAL, CONSTANT, SEX, IQ, SXIQ, Corrected Model and Corrected Total; R-Squared = .272, Adjusted R-Squared = .239]

(Parameter estimates omitted)

* * * * * * A n a l y s i s o f V a r i a n c e -- design 2 * * * * * *

Tests of Significance for OCC1 using UNIQUE sums of squares

[ANOVA table: WITHIN+RESIDUAL, CONSTANT, SEX, IQ, Corrected Model and Corrected Total; R-Squared = .269, Adjusted R-Squared = .247]

[Estimates for OCC1, with individual univariate .9500 confidence intervals for CONSTANT, SEX and IQ]

These results are quite clear: the interaction is not significant but, as the regression coefficients in the previous analysis showed, in a main-effects model with both variables, both sex and iq make a significant (p < .05) contribution. Why is sex significantly related to occ1 only when iq is held constant? The answer is that the mean of iq is higher for boys than for girls (107.9 versus 102.2) and, since iq is correlated with occ1 (the within-cell correlation, pooled over boys and girls, is .53), the boys' mean score on occ1 is higher for this reason alone. However, once the effect of iq is partialled out of sex, as described above, the effect of sex becomes significant. This can be illustrated with the graph below, which shows the relationship between iq (the Peabody score) and occ1 with separate symbols for boys and girls.

[Character plot of OCC1 with IQ (Peabody Picture Vocab Test score) by SEX, with separate symbols for males and females]

The circles (girls) do appear to be higher (on average) than the squares (boys) at any given iq level, and when we control for iq in another way, by calculating mean occ1 scores for several ranges of iq (up to 95, …, and 125+), the means are in fact higher for girls than for boys (though not necessarily significantly) in each range.

Creating Dummy Variables

A more ambitious analysis will now be undertaken, in which we will need to create dummy variables for a categorical variable with more than two categories. The analysis is an analysis of covariance, in which a post-test variable (occ21) is predicted from a grouping variable (group) while holding a pretest variable (occ1) constant. Two other variables, sex and iq, will also be included in the model to be analysed. The first task is to create dummy or indicator variables for group, which has five categories. The fifth category is the main control group, the "no contact controls", and this will be used as the reference group, for which all 5 − 1 = 4 dummy variables have the value of zero.
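The reference-cell coding just described can be sketched in Python. This is an illustration of the coding scheme only, not of SPSS itself: a case's group code maps to four 0/1 indicators, the reference group (group 5) gets all zeros, and a missing group value propagates to missing indicators, as the do if / else form below ensures.

```python
# Sketch of reference-cell (dummy) coding for a five-category variable,
# with category 5 (the "no contact controls") as the reference group.
def dummy_code(group):
    """Return (g1, g2, g3, g4) for a group code 1..5; Nones if group is missing."""
    if group is None:                      # missing group -> missing dummies
        return (None, None, None, None)
    return tuple(1 if group == k else 0 for k in (1, 2, 3, 4))

print(dummy_code(2))     # (0, 1, 0, 0)
print(dummy_code(5))     # (0, 0, 0, 0): the reference group is all zeros
print(dummy_code(None))  # (None, None, None, None)
```

Because the reference group is coded all zeros, each dummy's regression weight is interpreted as that group's difference from the reference group.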

The commands used to create the dummy variables are:

do if (group eq 1).
compute g1=1.
else.
compute g1=0.
end if.
do if (group eq 2).
compute g2=1.
else.
compute g2=0.
end if.
do if (group eq 3).
compute g3=1.
else.
compute g3=0.
end if.
do if (group eq 4).
compute g4=1.
else.
compute g4=0.
end if.

This method has the advantage that it treats missing data appropriately; i.e., in this case, if group were missing for a subject, the dummy variables g1, g2, g3 and g4 would all be missing for that case. If, as in this dataset, there are no missing data for the variable for which dummy variables are being created, the following form would be suitable:

compute g1=0.
if (group eq 1)g1=1.
compute g2=0.
if (group eq 2)g2=1.
compute g3=0.
if (group eq 3)g3=1.
compute g4=0.
if (group eq 4)g4=1.

The following commands can then be used to perform the analysis of covariance:

manova occ1 occ21 iq sex g1 to g4/
 analysis=occ21/
 method=sstype(unique)/
 print=param(estim)/
 design=constant occ1 iq sex g1+g2+g3+g4.

This contains no surprises when compared to previous runs, except for the pluses in the design statement. These are used to ask manova to treat the four dummy variables representing group as a single effect in the analysis of variance table, which is more useful for our purposes than having the sums of squares and F-ratios reported individually. The results below show quite clearly that, with occ1, sex and iq held constant, there are still significant differences between groups.

Tests of Significance for OCC21 using UNIQUE sums of squares

[ANOVA table: WITHIN+RESIDUAL, CONSTANT, OCC1, IQ, SEX, G1 + G2 + G3 + G4, Corrected Model and Corrected Total; R-Squared = .550, Adjusted R-Squared = .499]

This result will not be pursued here, but the same analysis will be taken up again when discussing another way of carrying out analyses of covariance with manova (not available in this version of the handout). It is worth noting, however, that if we wanted to test the homogeneity of the regression slopes (the relationship between the covariate occ1 and the dependent variable occ21) over groups, the following commands could be used to create the necessary interaction terms and test the interaction:

compute occ1g1=occ1*g1.
compute occ1g2=occ1*g2.
compute occ1g3=occ1*g3.
compute occ1g4=occ1*g4.
manova occ1 occ21 iq sex g1 to g4 occ1g1 occ1g2 occ1g3 occ1g4/
 analysis=occ21/
 method=sstype(unique)/
 print=param(estim)/
 design=constant occ1 iq sex g1+g2+g3+g4 occ1g1+occ1g2+occ1g3+occ1g4.

Specifying categorical variables after the 'by' statement as well as using numeric variables as predictors

If creating your own dummy variables and interaction terms seems to you to be a bit of a hassle, you'll be interested in the following example. It carries out the same analyses as those described above, but the by statement is brought back to look after the categorical variables. Here, the advantage of using the by statement is that you don't have to create your own coding for sex and group, the interaction term is created simply by specifying sex by group, and simple contrasts can be easily specified.
Perhaps the main disadvantage arises when it comes to interpreting the regression equation: generally speaking, having zero as one of the code values greatly simplifies interpretation, partly because terms involving the variable drop out of the equation when its value is zero.
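In SPSS, a simple contrast compares each level of a factor with a designated reference category, which is what contrast(group)=simple(5) requests below. The coefficient vectors that define simple contrasts can be sketched in Python; this illustrates the contrast definitions only, not SPSS internals, and the group means used are made up for illustration.

```python
# Sketch of "simple" contrasts for an n-level factor: each contrast is
# (mean of level k) - (mean of the reference level).
def simple_contrasts(n_levels, ref):
    """Contrast coefficient vectors comparing each non-reference level with ref."""
    rows = []
    for k in range(1, n_levels + 1):
        if k == ref:
            continue
        row = [0] * n_levels
        row[k - 1] = 1      # +1 on the level being compared
        row[ref - 1] = -1   # -1 on the reference level
        rows.append(row)
    return rows

contrasts = simple_contrasts(5, ref=5)
print(contrasts[0])  # [1, 0, 0, 0, -1]: group 1 versus the reference group

# Applying a contrast vector to a set of (hypothetical) group means gives the
# estimated difference from the reference group:
means = [12.0, 15.0, 11.0, 14.0, 10.0]   # made-up means, illustration only
estimate = sum(c * m for c, m in zip(contrasts[1], means))
print(estimate)  # group 2 mean minus group 5 mean
```

The five-level factor with reference level 5 yields 5 − 1 = 4 contrasts, matching the four dummy variables created by hand in the previous section.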

manova occ1 occ21 iq by sex (1,2) group (1,5)/
 analysis=occ21/
 contrast(sex)=simple(2)/
 contrast(group)=simple(5)/
 method=sstype(unique)/
 print=param(estim)/
 design=constant occ1 iq sex group/
 design=constant occ1 iq sex group sex by group.

Residual Plots

Finally, when using manova to carry out regression analyses (or any other analysis, for that matter), we will want to examine the residuals. The sub-command /resid=plot gives us the following important graphs, among others:

Plots of Observed, Predicted, and Standardized Residual Case Values

[Character plot: Observed vs. Predicted Values for OCC21]

[Character plot: Predicted Values vs. Standardized Residuals for OCC21]

[Normal Q-Q plot: expected normal values vs. standardized residuals of OCC21]

The group of three cases with residuals of less than −2 in the plot of standardised residuals against predicted values stands out worryingly (these cases are due to a ceiling effect present in these data), and we would probably want to identify them. The addition of the keyword casewise to the resids sub-command gives a listing partly shown below.

Observed and Predicted Values for Each Case
Dependent Variable.. OCC21 CPT SCORE: OCCASION 2 FORM 1

[Casewise listing: Case No., Observed, Predicted, Raw Resid. and Std Resid for each case]

In order to have a closer look at the cases with the very low residuals, or to omit them from analyses, they have to be identified. The listing above gives a Case No. which refers to the order of each case in the file. In order to use this information to select cases, it is best to compute a case number variable (the system variable $casenum is not always suitable for this purpose). The following commands carry this out:

compute casenum=casenum+1.
leave casenum.
temporary.
select if (casenum ne 4 and casenum ne 7 and casenum ne 35).
manova...

Normally SPSS initialises each variable to system-missing for each case that it reads, so that a cumulation of values over cases would be impossible (in the present example, casenum would always equal system-missing). This is where the leave command comes in: it tells SPSS not to initialise casenum and, furthermore, sets its initial value to zero. So, for the first case (subject) in the file, casenum is initially zero, and adding one in the compute statement gives a value of 1 for that subject. This value remains unchanged for the next case, so that adding 1 gives a value of 2 for that subject, and so on. The casenum variable can then be used to refer to particular cases.

Regression

Although the manova procedure is probably preferable for most purposes, the regression procedure does a few things that manova doesn't do, and is worth knowing about for that reason. Regression has a better selection of diagnostics, some of which are mentioned at the end of this section.
Another useful capability is that regression produces standardised regression weights (betas, ß) as well as the unstandardised weights produced by manova. Finally, and many would consider this a mixed blessing, regression has a number of stepwise options.
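The relationship between the two kinds of weight can be sketched: a standardised weight rescales the unstandardised weight by the ratio of the predictor's standard deviation to the dependent variable's standard deviation, beta = b × sd(x)/sd(y). The data and the b value below are made up purely for illustration.

```python
import statistics

# A standardised regression weight (beta) rescales the unstandardised
# weight (b) by the spread of the predictor relative to the outcome:
#     beta = b * sd(x) / sd(y)
def standardised_weight(b, x_values, y_values):
    return b * statistics.stdev(x_values) / statistics.stdev(y_values)

# Hypothetical data and unstandardised weight, for illustration only:
iq = [95, 100, 105, 110, 115]
occ1 = [8, 10, 11, 13, 14]
b_iq = 0.30
print(round(standardised_weight(b_iq, iq, occ1), 3))
```

A beta expresses the expected change in the outcome, in standard-deviation units, per standard-deviation change in the predictor, which is why betas (unlike the raw b weights) can be compared across predictors measured on different scales.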

The first example of regression repeats the analysis carried out earlier with manova: sex is recoded so that its values are 0 and 1 (this makes the meaning of the main effects in the presence of the interaction more sensible, but that's another story), and sex and iq are multiplied to form an interaction term. These variables are entered in the vars list of the regression command, together with occ1, which is then specified as the dependent variable in the dependent subcommand. In this example, we use the method=enter subcommand, which tells regression to enter all of the predictor variables into the model:

recode sex (2=0).
compute sxiq=sex*iq.
regression vars=occ1 sex iq sxiq/
 dependent=occ1/
 method=enter sex iq sxiq.

The results are the same as those produced by manova (as they should be), except that standardised regression weights are also shown.

* * * * M U L T I P L E R E G R E S S I O N * * * *

[Output: listwise deletion of missing data; dependent variable OCC1 (CPT score: mean of forms 1 & 2); SXIQ, IQ and SEX entered; Multiple R, R Square, Adjusted R Square and Standard Error; analysis of variance table; B, SE B, Beta, T and Sig T for SEX, IQ, SXIQ and the constant]

The second example shows one way of using one of the stepwise facilities in regression. In this case we have occ21 as the dependent variable and occ1, iq and sex as predictor variables. We want to see whether iq and/or sex can be dropped from the model, but want to keep occ1 in regardless. The method=enter subcommand enters all the variables into the model on the first step. This results in the output shown below.

regression vars=occ21 occ1 iq sex/
 dependent=occ21/
 method=enter/stepwise=iq sex.

* * * * M U L T I P L E R E G R E S S I O N * * * *

[Output: listwise deletion of missing data; dependent variable OCC21 (CPT score: occasion 2, form 1); SEX, OCC1 and IQ entered on the first step, with the usual summary and coefficient tables]

The continuation of the method subcommand, stepwise=iq sex, then tells regression to consider iq and sex for dropping, but not to consider occ1, which will stay in the model. As might be expected from the full-model result, iq is dropped (see output below). With iq removed, the significance level for sex falls below the default criterion for removal, so sex stays in the model: regression uses defaults of .05 for variables to enter the model (pin) and .10 for variables to be dropped from the model (pout). The pin and pout values can be changed with the criteria subcommand. For example, to adopt a pin of .01 and a pout of .011, the subcommand would be criteria=pin(.01) pout(.011), placed before the dependent subcommand.

One point to note about the regression procedure is that it expects all variables to be numeric, so that if the 5-category group variable used in the previous examples was entered into the model as-is, regression would treat it as a numeric variable with values 1 to 5, so


Example 1 of panel data : Data for 6 airlines (groups) over 15 years (time periods) Example 1 Panel data set Consists of n entities or subjects (e.g., firms and states), each of which includes T observations measured at 1 through t time period. total number of observations : nt Panel data have

More information

Introduction to SPSS. Alan Taylor, Department of Psychology Macquarie University Macquarie University

Introduction to SPSS. Alan Taylor, Department of Psychology Macquarie University Macquarie University Introduction to SPSS Alan Taylor, Department of Psychology Macquarie University 2002-2010 Macquarie University -ii- Note: During the life of v. 17 of SPSS, it underwent a name change, to PASW Statistics

More information

Introduction to SPSS for Windows

Introduction to SPSS for Windows Introduction to SPSS for Windows Alan Taylor, Department of Psychology Macquarie University 2002-2006 Macquarie University ii iii Contents 1. Introduction 1 2. Starting SPSS 1 3. What You See 1 4. Setting

More information

7. Collinearity and Model Selection

7. Collinearity and Model Selection Sociology 740 John Fox Lecture Notes 7. Collinearity and Model Selection Copyright 2014 by John Fox Collinearity and Model Selection 1 1. Introduction I When there is a perfect linear relationship among

More information

( ) = Y ˆ. Calibration Definition A model is calibrated if its predictions are right on average: ave(response Predicted value) = Predicted value.

( ) = Y ˆ. Calibration Definition A model is calibrated if its predictions are right on average: ave(response Predicted value) = Predicted value. Calibration OVERVIEW... 2 INTRODUCTION... 2 CALIBRATION... 3 ANOTHER REASON FOR CALIBRATION... 4 CHECKING THE CALIBRATION OF A REGRESSION... 5 CALIBRATION IN SIMPLE REGRESSION (DISPLAY.JMP)... 5 TESTING

More information

in this course) ˆ Y =time to event, follow-up curtailed: covered under ˆ Missing at random (MAR) a

in this course) ˆ Y =time to event, follow-up curtailed: covered under ˆ Missing at random (MAR) a Chapter 3 Missing Data 3.1 Types of Missing Data ˆ Missing completely at random (MCAR) ˆ Missing at random (MAR) a ˆ Informative missing (non-ignorable non-response) See 1, 38, 59 for an introduction to

More information

Skill 1: Multiplying Polynomials

Skill 1: Multiplying Polynomials CS103 Spring 2018 Mathematical Prerequisites Although CS103 is primarily a math class, this course does not require any higher math as a prerequisite. The most advanced level of mathematics you'll need

More information

DataSet2. <none> <none> <none>

DataSet2. <none> <none> <none> GGraph Notes Output Created 09-Dec-0 07:50:6 Comments Input Active Dataset Filter Weight Split File DataSet Syntax Resources N of Rows in Working Data File Processor Time Elapsed Time 77 GGRAPH /GRAPHDATASET

More information

Chapters 5-6: Statistical Inference Methods

Chapters 5-6: Statistical Inference Methods Chapters 5-6: Statistical Inference Methods Chapter 5: Estimation (of population parameters) Ex. Based on GSS data, we re 95% confident that the population mean of the variable LONELY (no. of days in past

More information

Generalized least squares (GLS) estimates of the level-2 coefficients,

Generalized least squares (GLS) estimates of the level-2 coefficients, Contents 1 Conceptual and Statistical Background for Two-Level Models...7 1.1 The general two-level model... 7 1.1.1 Level-1 model... 8 1.1.2 Level-2 model... 8 1.2 Parameter estimation... 9 1.3 Empirical

More information

E-Campus Inferential Statistics - Part 2

E-Campus Inferential Statistics - Part 2 E-Campus Inferential Statistics - Part 2 Group Members: James Jones Question 4-Isthere a significant difference in the mean prices of the stores? New Textbook Prices New Price Descriptives 95% Confidence

More information

Source df SS MS F A a-1 [A] [T] SS A. / MS S/A S/A (a)(n-1) [AS] [A] SS S/A. / MS BxS/A A x B (a-1)(b-1) [AB] [A] [B] + [T] SS AxB

Source df SS MS F A a-1 [A] [T] SS A. / MS S/A S/A (a)(n-1) [AS] [A] SS S/A. / MS BxS/A A x B (a-1)(b-1) [AB] [A] [B] + [T] SS AxB Keppel, G. Design and Analysis: Chapter 17: The Mixed Two-Factor Within-Subjects Design: The Overall Analysis and the Analysis of Main Effects and Simple Effects Keppel describes an Ax(BxS) design, which

More information

Averages and Variation

Averages and Variation Averages and Variation 3 Copyright Cengage Learning. All rights reserved. 3.1-1 Section 3.1 Measures of Central Tendency: Mode, Median, and Mean Copyright Cengage Learning. All rights reserved. 3.1-2 Focus

More information

THE UNIVERSITY OF BRITISH COLUMBIA FORESTRY 430 and 533. Time: 50 minutes 40 Marks FRST Marks FRST 533 (extra questions)

THE UNIVERSITY OF BRITISH COLUMBIA FORESTRY 430 and 533. Time: 50 minutes 40 Marks FRST Marks FRST 533 (extra questions) THE UNIVERSITY OF BRITISH COLUMBIA FORESTRY 430 and 533 MIDTERM EXAMINATION: October 14, 2005 Instructor: Val LeMay Time: 50 minutes 40 Marks FRST 430 50 Marks FRST 533 (extra questions) This examination

More information

range: [1,20] units: 1 unique values: 20 missing.: 0/20 percentiles: 10% 25% 50% 75% 90%

range: [1,20] units: 1 unique values: 20 missing.: 0/20 percentiles: 10% 25% 50% 75% 90% ------------------ log: \Term 2\Lecture_2s\regression1a.log log type: text opened on: 22 Feb 2008, 03:29:09. cmdlog using " \Term 2\Lecture_2s\regression1a.do" (cmdlog \Term 2\Lecture_2s\regression1a.do

More information

Creating a data file and entering data

Creating a data file and entering data 4 Creating a data file and entering data There are a number of stages in the process of setting up a data file and analysing the data. The flow chart shown on the next page outlines the main steps that

More information

Multiple Linear Regression

Multiple Linear Regression Multiple Linear Regression Rebecca C. Steorts, Duke University STA 325, Chapter 3 ISL 1 / 49 Agenda How to extend beyond a SLR Multiple Linear Regression (MLR) Relationship Between the Response and Predictors

More information

Running Minitab for the first time on your PC

Running Minitab for the first time on your PC Running Minitab for the first time on your PC Screen Appearance When you select the MINITAB option from the MINITAB 14 program group, or click on MINITAB 14 under RAS you will see the following screen.

More information

Regression. Dr. G. Bharadwaja Kumar VIT Chennai

Regression. Dr. G. Bharadwaja Kumar VIT Chennai Regression Dr. G. Bharadwaja Kumar VIT Chennai Introduction Statistical models normally specify how one set of variables, called dependent variables, functionally depend on another set of variables, called

More information

Resources for statistical assistance. Quantitative covariates and regression analysis. Methods for predicting continuous outcomes.

Resources for statistical assistance. Quantitative covariates and regression analysis. Methods for predicting continuous outcomes. Resources for statistical assistance Quantitative covariates and regression analysis Carolyn Taylor Applied Statistics and Data Science Group (ASDa) Department of Statistics, UBC January 24, 2017 Department

More information

Predict Outcomes and Reveal Relationships in Categorical Data

Predict Outcomes and Reveal Relationships in Categorical Data PASW Categories 18 Specifications Predict Outcomes and Reveal Relationships in Categorical Data Unleash the full potential of your data through predictive analysis, statistical learning, perceptual mapping,

More information

For our example, we will look at the following factors and factor levels.

For our example, we will look at the following factors and factor levels. In order to review the calculations that are used to generate the Analysis of Variance, we will use the statapult example. By adjusting various settings on the statapult, you are able to throw the ball

More information

8. MINITAB COMMANDS WEEK-BY-WEEK

8. MINITAB COMMANDS WEEK-BY-WEEK 8. MINITAB COMMANDS WEEK-BY-WEEK In this section of the Study Guide, we give brief information about the Minitab commands that are needed to apply the statistical methods in each week s study. They are

More information

1. Basic Steps for Data Analysis Data Editor. 2.4.To create a new SPSS file

1. Basic Steps for Data Analysis Data Editor. 2.4.To create a new SPSS file 1 SPSS Guide 2009 Content 1. Basic Steps for Data Analysis. 3 2. Data Editor. 2.4.To create a new SPSS file 3 4 3. Data Analysis/ Frequencies. 5 4. Recoding the variable into classes.. 5 5. Data Analysis/

More information

Regression Analysis and Linear Regression Models

Regression Analysis and Linear Regression Models Regression Analysis and Linear Regression Models University of Trento - FBK 2 March, 2015 (UNITN-FBK) Regression Analysis and Linear Regression Models 2 March, 2015 1 / 33 Relationship between numerical

More information

STAT 2607 REVIEW PROBLEMS Word problems must be answered in words of the problem.

STAT 2607 REVIEW PROBLEMS Word problems must be answered in words of the problem. STAT 2607 REVIEW PROBLEMS 1 REMINDER: On the final exam 1. Word problems must be answered in words of the problem. 2. "Test" means that you must carry out a formal hypothesis testing procedure with H0,

More information

Mean Tests & X 2 Parametric vs Nonparametric Errors Selection of a Statistical Test SW242

Mean Tests & X 2 Parametric vs Nonparametric Errors Selection of a Statistical Test SW242 Mean Tests & X 2 Parametric vs Nonparametric Errors Selection of a Statistical Test SW242 Creation & Description of a Data Set * 4 Levels of Measurement * Nominal, ordinal, interval, ratio * Variable Types

More information

Regression. Page 1. Notes. Output Created Comments Data. 26-Mar :31:18. Input. C:\Documents and Settings\BuroK\Desktop\Data Sets\Prestige.

Regression. Page 1. Notes. Output Created Comments Data. 26-Mar :31:18. Input. C:\Documents and Settings\BuroK\Desktop\Data Sets\Prestige. GET FILE='C:\Documents and Settings\BuroK\Desktop\DataSets\Prestige.sav'. GET FILE='E:\MacEwan\Teaching\Stat252\Data\SPSS_data\MENTALID.sav'. DATASET ACTIVATE DataSet1. DATASET CLOSE DataSet2. GET FILE='E:\MacEwan\Teaching\Stat252\Data\SPSS_data\survey_part.sav'.

More information

JMP Book Descriptions

JMP Book Descriptions JMP Book Descriptions The collection of JMP documentation is available in the JMP Help > Books menu. This document describes each title to help you decide which book to explore. Each book title is linked

More information

PSY 9556B (Jan8) Design Issues and Missing Data Continued Examples of Simulations for Projects

PSY 9556B (Jan8) Design Issues and Missing Data Continued Examples of Simulations for Projects PSY 9556B (Jan8) Design Issues and Missing Data Continued Examples of Simulations for Projects Let s create a data for a variable measured repeatedly over five occasions We could create raw data (for each

More information

Frequently Asked Questions Updated 2006 (TRIM version 3.51) PREPARING DATA & RUNNING TRIM

Frequently Asked Questions Updated 2006 (TRIM version 3.51) PREPARING DATA & RUNNING TRIM Frequently Asked Questions Updated 2006 (TRIM version 3.51) PREPARING DATA & RUNNING TRIM * Which directories are used for input files and output files? See menu-item "Options" and page 22 in the manual.

More information

THIS IS NOT REPRESNTATIVE OF CURRENT CLASS MATERIAL. STOR 455 Midterm 1 September 28, 2010

THIS IS NOT REPRESNTATIVE OF CURRENT CLASS MATERIAL. STOR 455 Midterm 1 September 28, 2010 THIS IS NOT REPRESNTATIVE OF CURRENT CLASS MATERIAL STOR 455 Midterm September 8, INSTRUCTIONS: BOTH THE EXAM AND THE BUBBLE SHEET WILL BE COLLECTED. YOU MUST PRINT YOUR NAME AND SIGN THE HONOR PLEDGE

More information

STATS PAD USER MANUAL

STATS PAD USER MANUAL STATS PAD USER MANUAL For Version 2.0 Manual Version 2.0 1 Table of Contents Basic Navigation! 3 Settings! 7 Entering Data! 7 Sharing Data! 8 Managing Files! 10 Running Tests! 11 Interpreting Output! 11

More information

In our first lecture on sets and set theory, we introduced a bunch of new symbols and terminology.

In our first lecture on sets and set theory, we introduced a bunch of new symbols and terminology. Guide to and Hi everybody! In our first lecture on sets and set theory, we introduced a bunch of new symbols and terminology. This guide focuses on two of those symbols: and. These symbols represent concepts

More information

Descriptives. Graph. [DataSet1] C:\Documents and Settings\BuroK\Desktop\Prestige.sav

Descriptives. Graph. [DataSet1] C:\Documents and Settings\BuroK\Desktop\Prestige.sav GET FILE='C:\Documents and Settings\BuroK\Desktop\Prestige.sav'. DESCRIPTIVES VARIABLES=prestige education income women /STATISTICS=MEAN STDDEV MIN MAX. Descriptives Input Missing Value Handling Resources

More information

Chapter One: Getting Started With IBM SPSS for Windows

Chapter One: Getting Started With IBM SPSS for Windows Chapter One: Getting Started With IBM SPSS for Windows Using Windows The Windows start-up screen should look something like Figure 1-1. Several standard desktop icons will always appear on start up. Note

More information

Statistical Analysis Using SPSS for Windows Getting Started (Ver. 2018/10/30) The numbers of figures in the SPSS_screenshot.pptx are shown in red.

Statistical Analysis Using SPSS for Windows Getting Started (Ver. 2018/10/30) The numbers of figures in the SPSS_screenshot.pptx are shown in red. Statistical Analysis Using SPSS for Windows Getting Started (Ver. 2018/10/30) The numbers of figures in the SPSS_screenshot.pptx are shown in red. 1. How to display English messages from IBM SPSS Statistics

More information

Clustering and Visualisation of Data

Clustering and Visualisation of Data Clustering and Visualisation of Data Hiroshi Shimodaira January-March 28 Cluster analysis aims to partition a data set into meaningful or useful groups, based on distances between data points. In some

More information

Statistical Pattern Recognition

Statistical Pattern Recognition Statistical Pattern Recognition Features and Feature Selection Hamid R. Rabiee Jafar Muhammadi Spring 2012 http://ce.sharif.edu/courses/90-91/2/ce725-1/ Agenda Features and Patterns The Curse of Size and

More information

Multicollinearity and Validation CIVL 7012/8012

Multicollinearity and Validation CIVL 7012/8012 Multicollinearity and Validation CIVL 7012/8012 2 In Today s Class Recap Multicollinearity Model Validation MULTICOLLINEARITY 1. Perfect Multicollinearity 2. Consequences of Perfect Multicollinearity 3.

More information

IQC monitoring in laboratory networks

IQC monitoring in laboratory networks IQC for Networked Analysers Background and instructions for use IQC monitoring in laboratory networks Modern Laboratories continue to produce large quantities of internal quality control data (IQC) despite

More information

Section E. Measuring the Strength of A Linear Association

Section E. Measuring the Strength of A Linear Association This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike License. Your use of this material constitutes acceptance of that license and the conditions of use of materials on this

More information