Keppel, G. Design and Analysis: Chapter 17: The Mixed Two-Factor Within-Subjects Design: The Overall Analysis and the Analysis of Main Effects and Simple Effects

Keppel describes an Ax(BxS) design, which involves one between (independent groups) factor (A) and one within (repeated measures) factor (B). He refers to this as a mixed within-subjects factorial design. Some people use "mixed design" to mean a combination of a fixed-effects factor with a random-effects factor, so you should be aware of that fact (and of the difference between the two usages of "mixed").

17.1 The Overall Analysis of Variance

The A x (B x S) design is (sort of) like combining a single-factor independent groups design with a single-factor repeated measures design (and gaining the benefit of an interaction term). Keppel shows you all sorts of matrices, which are really useful if you're going to compute the analyses by hand. However, given that you are likely to use the computer to analyze these complex designs, I'll shortcut the summary of the discussion to focus on the components crucial to computing the mixed design with the various statistical packages (and understanding the output). Keppel also shows you how to generate the df and basic ratios for hand computation of these ANOVAs. Once again, I refer you to the chapter (pp. 369-371) for your edification. What is important, I think, is for you to have a sense of the df and error terms that would be used in this particular two-factor design. As seen in Table 17-3:

Source    df              SS                        MS                    F
A         a-1             [A] - [T]                 SS_A / df_A           MS_A / MS_S/A
S/A       (a)(n-1)        [AS] - [A]                SS_S/A / df_S/A
B         b-1             [B] - [T]                 SS_B / df_B           MS_B / MS_BxS/A
A x B     (a-1)(b-1)      [AB] - [A] - [B] + [T]    SS_AxB / df_AxB       MS_AxB / MS_BxS/A
B x S/A   (a)(b-1)(n-1)   [Y] - [AB] - [AS] + [A]   SS_BxS/A / df_BxS/A
Total     (a)(b)(n) - 1   [Y] - [T]

Note that the error term used for the between factor (A) is the usual suspect (S/A).
However, the way that SS_S/A is computed will differ slightly from the way one computes it in a single-factor between design. In a single-factor between design, a participant provides only one score within each level of A. For this mixed design, however, a participant provides as many scores within each level of A as there are levels of B. For example, imagine a single-factor independent groups experiment with 3 levels (left) and a mixed 3x4 design with the first factor (A) between groups (right). The data would appear as seen below:

Single-factor:               Mixed 3x4:
a1       a2       a3         a1            a2            a3
p1  3    p4  2    p7  7      p1  4,5,7,3   p4  7,3,6,4   p7  8,4,3,4
p2  4    p5  9    p8  3      p2  4,6,8,4   p5  3,8,4,6   p8  4,8,9,2
p3  5    p6  5    p9  8      p3  7,3,8,2   p6  6,4,5,6   p9  3,4,8,6

The trick to translating between the two sets of data is to add together the 4 scores for each participant in the mixed design. Once that is done, the analysis between the two designs is equivalent. Note, however, that using the sum means that the specific order of the scores from the levels of B is irrelevant. Note, also, that you need to keep in mind that the sum came from four scores. (We'll return to that point below.)
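As a quick check on this bookkeeping, the df formulas from Table 17-3 can be evaluated directly. This is an illustrative sketch (the function name is mine, not Keppel's), using the dimensions of the numerical example analyzed below (a = 3 motivation groups, b = 4 trials, n = 4 participants per group):

```python
# Degrees of freedom for an A x (B x S) mixed design (Table 17-3 formulas).
# a = levels of the between factor, b = levels of the repeated factor,
# n = participants per group.
def mixed_df(a, b, n):
    return {
        "A": a - 1,
        "S/A": a * (n - 1),
        "B": b - 1,
        "AxB": (a - 1) * (b - 1),
        "BxS/A": a * (b - 1) * (n - 1),
        "Total": a * b * n - 1,
    }

print(mixed_df(a=3, b=4, n=4))
# {'A': 2, 'S/A': 9, 'B': 3, 'AxB': 6, 'BxS/A': 27, 'Total': 47}
```

These are exactly the df values (2, 9, 3, 6, 27) that appear in the source tables below, and the effect df sum to the total df.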

Keppel's numerical example consists of a fictitious experiment in which participants are given a digit-cancellation task under three conditions of motivation. The participants in a1 are simply asked to do their best. Participants in a2, on the other hand, are given highly motivating instructions, and participants in a3 are offered $5 if they perform at some predetermined level. The actual task is to cross out certain digits from a long series of digits in a test booklet, with the dv = the number of digits correctly cancelled within a given time period. Keppel shows the calculations, as well as the source table for this ANOVA. What follows here are the procedures for computing the ANOVA using StatView and SuperANOVA.

Computing the Overall ANOVA using StatView

First, you would enter the data as seen below. Note, once again, that you put all the information from an individual participant on a single row. Next, you would compact the repeated factor (Trials). Then, choose ANOVA-repeated measures from the Analyze menu. Within the next window, drag Motivation to the Between Factor window and Trials to the Repeated Measurements window, then click on OK. You'll see a new window containing the source table seen below:

ANOVA Table for Trial
Source                                df   Sum of Squares   Mean Square   F-Value   P-Value   Lambda    Power
Motivation                            2    414.125          207.063       6.168     .0206     12.336    .759
Subject(Group)                        9    302.125          33.569
Category for Trial                    3    1152.417         384.139       114.763   <.0001    344.290   1.000
Category for Trial * Motivation       6    130.208          21.701        6.483     .0003     38.900    .996
Category for Trial * Subject(Group)   27   90.375           3.347

Lots of other output is generated, including means tables. However, the default ANOVA output that StatView produces includes histograms, which I wouldn't particularly want. To have more control over the nature of the output generated, instead of choosing ANOVA-repeated measures, choose New View from the Analyze menu, then select the particular output that you want from the ANOVA menu on the left of the window.
In any event, StatView is certainly generating the same source table that Keppel produced, so everything is hunky-dory so far. As was the case for both one-way and two-way independent groups ANOVAs, the MS Error for the Motivation (between groups) factor is the pooled variance of the three motivation groups. For example, for the Do Best group, SS_S/A would be:

SS_S/A1 = [(64)² + (50)² + (71)² + (45)²] / 4 - (230)² / [(4)(4)] = 109.25

Each of the values in the numerator represents the sum of all four B scores (e.g., 64 = 13+14+17+20), which is why the denominator is 4 (the number of scores added before squaring). Then, MS_S/A1 would be 109.25 / (4-1) = 36.42. The MS_S/A for the High Motivation Instructions group would be 45.73, and for the Paid $5 group it would be 18.56. The mean of these three variances would be 33.57.

The repeated-measures error term (MS_BxS/A) is an average of the separate BxS interactions at the different levels of factor A. In essence, then, you are computing three separate one-way repeated measures ANOVAs (one for each level of Motivation), and then taking the mean of the three error terms (3.31, 3.06, and 3.67, for Do Best, High Instruct, and Paid $5 respectively).

Computing the Overall ANOVA using SuperANOVA

The data would be entered and compacted just as in StatView. Next, from the Canned Models, choose Repeated-1 within 1 between. The source table would appear as seen below. Note that the SuperANOVA output contains the p-values adjusted for heterogeneity of variance (G-G). Because the Motivation factor is between groups, no correction is done for that factor. You could assess the likelihood of violation of the homogeneity of variance assumption for that factor by using the Brown-Forsythe procedure.

Source             df   Sum of Squares   Mean Square   F-Value   P-Value   G-G     H-F
Motivation         2    414.125          207.063       6.168     .0206
Subject(Group)     9    302.125          33.569
Trials             3    1152.417         384.139       114.763   .0001     .0001   .0001
Trials * Motivat   6    130.208          21.701        6.483     .0003     .0007   .0003
Trials * Subject   27   90.375           3.347
Dependent: Trials

To generate the means tables or graphs, simply click on the particular factor of interest and then Means Table, etc. For instance, SuperANOVA would generate the interaction graph seen below:

[interaction graph omitted]

It's not as good a graph as Cricket Graph would generate, but it's fine for taking a preliminary look at the data. Keppel points out the composite nature of the error terms. That is, the error term for the between factor is computed by averaging the variances for each of the three groups separately. The data that you use for each group, however, require that you first sum across all the scores for a participant. That is, when computing the variance for the first group (no instructions), you would use 64, 50, 71, and 45 as the data. The error term for the repeated factor is actually the average of the variances for each of the B x S/A interactions, as Keppel illustrates on pp. 376-377.

17.2 Statistical Model and Assumptions

The sphericity assumption (essentially homogeneity of variance) is important to assess, but the simplest way is to use the Geisser-Greenhouse correction that is generated by SuperANOVA. Note that for less severe violations of the assumption, the G-G correction is an overcorrection. It's also possible to use the Brown-Forsythe procedure on the between factor, though the study that Keppel cites showed a greater problem for violations of sphericity for the repeated factor than for violations of homogeneity of variance for the between factor.

Missing data are always a problem in repeated measures designs. Thus, Keppel is assuming no missing data. Note that the various programs will delete all the data for a participant if one piece is missing from the analysis.

You can remove practice effects from the repeated factor when you want to do so. However, it's a bit tricky. Keppel showed how to do so in an earlier edition of the textbook (but left it out of this edition). Essentially, you can re-compute the ANOVA using the position effect as the repeated factor. That is, instead of AxB, you would use AxP. Then, you would remove the SS_P (position) and the SS_AxP (interaction of position and the between factor) from the SS_BxS/A (the original error term).
Of course, you'd have to do the same thing for the degrees of freedom. For the data set that we've been analyzing, the repeated factor was trial, so it was not counterbalanced. Thus, you would not want to remove the effects of position, because that's what you're actually studying. Suppose, however, that you were dealing with a 2x4 mixed design with the first factor between and the second factor within. The data would look like this:

         a1                       a2
     b1  b2  b3  b4           b1  b2  b3  b4
s1   3   4   7   3       s5   5   6   11  7
s2   6   8   12  9       s6   10  12  18  15
s3   7   13  11  11      s7   10  15  15  14
s4   0   3   6   6       s8   5   7   11  9

The source table for these data (in SuperANOVA) would be:

Source            df   Sum of Squares   Mean Square   F-Value   P-Value   G-G     H-F
A                 1    116.281          116.281       2.506     .1645
Subject(Group)    6    278.438          46.406
B                 3    129.594          43.198        22.336    .0001     .0001   .0001
B * A             3    3.344            1.115         .576      .6380     .5693   .6380
B * Subject(Gro   18   34.813           1.934
Dependent: B
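Under the assumption that SuperANOVA is reproducing the hand-computation ("bracket term") formulas from Table 17-3, the 2x4 source table above can be rebuilt from the raw data in a few lines of Python (a sketch; the variable names are mine):

```python
# Mixed A x (B x S) ANOVA via bracket terms, for the 2x4 example above.
# Rows: participants (s1-s4 in a1, s5-s8 in a2); columns: b1-b4.
a1 = [[3, 4, 7, 3], [6, 8, 12, 9], [7, 13, 11, 11], [0, 3, 6, 6]]
a2 = [[5, 6, 11, 7], [10, 12, 18, 15], [10, 15, 15, 14], [5, 7, 11, 9]]
groups = [a1, a2]
a, n, b = 2, 4, 4                    # groups, subjects/group, repeated levels

T = sum(x for g in groups for row in g for x in row)        # grand total
bT = T**2 / (a * n * b)                                     # [T]
bA = sum(sum(x for row in g for x in row)**2 for g in groups) / (n * b)  # [A]
bAS = sum(sum(row)**2 for g in groups for row in g) / b     # [AS]
b_tot = [sum(g[i][j] for g in groups for i in range(n)) for j in range(b)]
bB = sum(t**2 for t in b_tot) / (a * n)                     # [B]
cells = [sum(g[i][j] for i in range(n)) for g in groups for j in range(b)]
bAB = sum(c**2 for c in cells) / n                          # [AB]
bY = sum(x**2 for g in groups for row in g for x in row)    # [Y]

ss = {"A": bA - bT, "S/A": bAS - bA, "B": bB - bT,
      "BxA": bAB - bA - bB + bT, "BxS/A": bY - bAB - bAS + bA}
# ss matches the source table: 116.281, 278.438, 129.594, 3.344, 34.813
f_A = (ss["A"] / 1) / (ss["S/A"] / 6)
f_B = (ss["B"] / 3) / (ss["BxS/A"] / 18)
f_BxA = (ss["BxA"] / 3) / (ss["BxS/A"] / 18)
print(round(f_A, 3), round(f_B, 3), round(f_BxA, 3))        # 2.506 22.336 0.576
```

All five sums of squares and all three F ratios agree with the SuperANOVA output, which is a nice confirmation that the bracket-term shortcuts really are what the package computes.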

Now, let's assume that four orders were used: b1, b2, b3, b4; b3, b1, b4, b2; b2, b4, b1, b3; and b4, b3, b2, b1 for the four participants in each of the two levels of A. All I would need to do is to reorder the data by position and then re-compute the ANOVA.

         a1                       a2
     p1  p2  p3  p4           p1  p2  p3  p4
s1   3   4   7   3       s5   5   6   11  7
s2   12  6   9   8       s6   18  10  15  12
s3   13  11  7   11      s7   15  14  10  15
s4   6   6   3   0       s8   9   11  7   5

The ANOVA on the position data would be:

Source            df   Sum of Squares   Mean Square   F-Value   P-Value   G-G     H-F
A                 1    116.281          116.281       2.506     .1645
Subject(Group)    6    278.438          46.406
P                 3    25.844           8.615         1.105     .3728     .3703   .3728
P * A             3    1.594            .531          .068      .9762     .9663   .9762
P * Subject(Gro   18   140.313          7.795
Dependent: Position

You would then remove the position effects from the original ANOVA, yielding two new F's for B and AxB (as seen below). However, you'll no longer have the G-G correction computed for you automatically. Note that the between effects are not changed at all. However, the error term for the repeated components goes down from 1.934 to .615, which will yield much higher F's.

Source        df   SS        MS        F
A             1    116.281   116.281   2.506
S/A (Error)   6    278.438   46.406
B             3    129.594   43.198    70.24
BxA           3    3.344     1.115     1.81
Old Error     18   34.813    1.934
P             3    25.844
PxA           3    1.594
Resid Err     12   7.375     .615

17.3 Analyzing Main Effects

Okay, as you saw in the overall analyses for the K374 data, the two main effects were both significant. In the absence of a significant interaction, these would be the major focus of attention. In this particular example, the interaction was also significant, so for purposes of illustration we'll have to pretend that it isn't significant. How do you determine which of the group means contributing to the main effect are significantly different? First, let's look at the between factor (Motivation). Keppel illustrates how we might compare the highly motivated group (a2) with the group paid $5 (a3). Note that you would first compute the SS_Comparison, which is also the MS_Comparison because df = 1.
Then you would divide the MS_Comparison by the MS_S/A from the overall ANOVA, which yields an F_Comparison = 3.57. How would we generate such comparisons using the stats packages?

Comparisons to Assess the Main Effect for the Between Factor Using StatView

In order to compute the same comparison that Keppel illustrates, you would want to Exclude the data from participants in the first condition. After having done so, you would continue with the analysis exactly as you had done previously for the overall ANOVA. The source table would be:

ANOVA Table for Trials
Row exclusion: K374.data.SV
Source                        df   Sum of Squares   Mean Square   F-Value   P-Value
Motivation                    1    120.125          120.125       3.737     .1014
Subject(Group)                6    192.875          32.146
Category for Trials           3    1132.750         377.583       112.107   <.0001
Category for Trials * Moti    3    23.125           7.708         2.289     .1131
Category for Trials * Subje   18   60.625           3.368

Note that you would only use the MS for the comparison and jettison the rest. So, ignore the bit of rounding error and divide that MS by the MS Error from the overall ANOVA (33.569) to obtain your F_Comparison = 3.58. As a post hoc comparison, you'd generate an F_Critical using Tukey's procedure to assess the significance of the F_Comparison.

Comparisons to Assess the Main Effect for the Between Factor Using SuperANOVA

In order to compute the comparison, you would first have to Exclude the people in the first group. Then, compute the ANOVA exactly as you did previously for the overall ANOVA. You'll see a source table like the one below:

Source             df   Sum of Squares   Mean Square   F-Value   P-Value   G-G     H-F
Motivation         1    120.125          120.125       3.737     .1014
Subject(Group)     6    192.875          32.146
Trials             3    1132.750         377.583       112.107   .0001     .0001   .0001
Trials * Motivat   3    23.125           7.708         2.289     .1131     .1356   .1131
Trials * Subject   18   60.625           3.368
Dependent: Trials

What you'd now do is to take the MS generated for the Motivation factor (note that df = 1), and divide that by the MS Error (MS_S/A) from the overall ANOVA (which was 33.569). Thus, you'll get F = 3.58. As a post hoc comparison, you'd generate an F_Critical using Tukey's procedure to assess the significance of the F_Comparison.
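Both packages' answers reduce to the same two-step arithmetic, which can be sketched with the values printed above (the pooled error term is just the mean of the three group variances computed earlier):

```python
# F for the a2 vs. a3 between-groups comparison. The MS for the comparison
# comes from the reduced (two-group) ANOVA; the error term is MS_S/A from
# the OVERALL ANOVA, i.e., the mean of the three group variances.
ms_error_overall = (36.42 + 45.73 + 18.56) / 3   # 33.57 (tables print 33.569)
ms_comparison = 120.125                          # Motivation MS, df = 1

f_comparison = ms_comparison / ms_error_overall
print(round(f_comparison, 2))   # 3.58
```

The point of the sketch is the substitution: the F of 3.737 printed by the packages uses the two-group error term, whereas the comparison should be evaluated against the overall pooled error.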
If a comparison involves a repeated factor, the error term typically involves only the conditions being compared (rather than the overall error term). Thus, the F generated by a computer analysis is usually all you need for the comparison. Although the four trials involved in the example from K374 would often be analyzed using trend analysis, Keppel shows how you could compare just the first and last trials to see if a difference in cancellation emerges. The F obtained for this comparison is 216.90. As a post hoc comparison, you should choose to compare this F to a critical F using Tukey's procedure.

Note that if we were computing complex comparisons rather than a simple comparison of two group means, we would need to adjust the SS as illustrated in Ch. 16 (pp. 359-360).

Comparisons to Assess the Main Effect for the Repeated Factor Using StatView

Suppose that we want to compare levels 1 and 4 of Factor B (as Keppel does on p. 381). The first step would be to Expand the compacted variable. Then, we need to get levels 1 and 4 next to each other. I'd use copy/paste to move them to contiguous columns, then I would Compact the two columns. Finally, I would compute the repeated measures ANOVA on the compacted variable (but also including the between factor), yielding the source table seen below:

ANOVA Table for Factor B Comparison
Source                                    df   Sum of Squares   Mean Square   F-Value   P-Value
A                                         2    121.083          60.542        3.440     .0777
Subject(Group)                            9    158.375          17.597
Category for Factor B Comparison          1    950.042          950.042       217.152   <.0001
Category for Factor B Comparison * A      2    85.083           42.542        9.724     .0056
Category for Factor B Comparison * Subj   9    39.375           4.375

In this case, we would be able to directly use the F_Comparison = 217.152. That is, the error term is appropriate. As a post hoc comparison, we would then compare this F to a critical value of F using Tukey's procedure.

Comparisons to Assess the Main Effect for the Repeated Factor Using SuperANOVA

Again, let's do the comparison that Keppel worked out in the text. A shortcut to get the two columns next to one another is to use the Formula window and simply set the new column to the appropriate old column. In this case, I used the formula window to make the last column equal to Trial 1, then used the formula window again to make the (new) last column equal to Trial 2. Then, I compacted those two columns and ran the 1-between 1-within ANOVA.
The source table is as seen below:

Source             df   Sum of Squares   Mean Square   F-Value   P-Value   G-G     H-F
Motivation         2    121.083          60.542        3.440     .0777
Subject(Group)     9    158.375          17.597
trials.comp        1    950.042          950.042       217.152   .0001     .0001   .0001
trials.comp * M    2    85.083           42.542        9.724     .0056     .0056   .0056
trials.comp * Su   9    39.375           4.375
Dependent: Trial.comp

Aside from the rounding error differences, the source table is the same as Keppel's. The F for this comparison (217.152) would then be compared to the critical F, which as a post hoc comparison would often lead you to use Tukey's HSD.
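To make the contrast with the between-factor case explicit, here the F can be taken straight from the comparison's own ANOVA, with no substitution of an error term (a sketch using the printed values):

```python
# Repeated-factor comparison (trial 1 vs. trial 4): the error term from the
# comparison's own ANOVA is appropriate, so the printed F is usable as-is.
ms_comparison = 950.042   # trials.comp MS, df = 1
ms_error = 4.375          # trials.comp * Subject(Group) MS
print(round(ms_comparison / ms_error, 2))   # 217.15 (tables print 217.152)
```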

17.4 Analyzing Simple Effects: An Overview

The analysis of simple effects involves examining the data in the AB matrix row by row or column by column and looking for significance attributable to the variation of one of the independent variables while the other factor is held constant. Deciding on the appropriate error term for analyzing simple effects is not simple. One approach is to use the pooled error terms found in the original analysis, which would essentially assume that all the BxS interactions were homogeneous. As Keppel points out, the separate error terms make more sense in the mixed factorial design.

17.5 Simple Effects Involving the Repeated Factor

As Keppel indicates, this is simply turning the analysis into a one-way repeated measures ANOVA. He provides an example of looking at the simple effects of Trials for the people who were given a $5 incentive. For this simple effect, the F is 61.32 (see Table 17-10). I would argue that as a post hoc comparison, this F should be tested against a critical F using Tukey's procedure. Knowing that the simple effect is significant would then lead you to work to determine which of the specific means differed. Keppel compares the first and fourth trials (for just the group getting $5) and obtains an F of 155.57 (see Table 17-11). Again, I would test against Tukey's F_Critical. Now, let's get each of the stats packages to compute the simple effects and the following single-df comparisons.

Simple Effects Involving the Repeated Factor in StatView

The first step is to Exclude the rows for participants in the first two groups. Thus, the analysis will be a one-way repeated measures ANOVA on the $5 incentive group. The Trial factor was still compacted for me, but you may have to re-compact the variable. In any event, you'll choose the repeated measures ANOVA (with no between factor) and drag the compacted variable into the little window for the repeated factor.
The source table produced is seen below, but I'm leaving out the means table, etc.

ANOVA Table for Trials
Row exclusion: k374.data.sv
Source                        df   Sum of Squares   Mean Square   F-Value   P-Value
Subject                       3    55.688           18.562
Category for Trials           3    675.188          225.062       61.265    <.0001
Category for Trials * Subje   9    33.062           3.674

Now, given that the simple effect is significant, I'll need to compute the single-df comparisons to see which of the 4 means actually differ. To mimic Keppel's analysis, I'll just compare the first and fourth trials here. That involves leaving out the first two between groups (still), but now I need to move the first and fourth groups next to each other so that I can compact this reduced variable. After doing so, I get the output seen below:

ANOVA Table for Trials Comparison
Row exclusion: k374.data.sv
Source                        df   Sum of Squares   Mean Square   F-Value   P-Value
Subject                       3    34.500           11.500
Category for Factor B Comp    1    544.500          544.500       155.571   .0011
Category for Factor B Comp *  3    10.500           3.500

I would need to conduct analyses for the other conditions, but this provides you with the necessary template for how to approach these single-df comparisons.

Simple Effects Involving the Repeated Factor in SuperANOVA

The procedure is virtually identical to that for StatView. For the simple effects, you would first need to exclude the rows for the motivation conditions that you want to leave out. Then, you'd simply conduct the one-way repeated measures ANOVA on the remaining scores, yielding a source table that is virtually identical to the one from StatView. The same would be true of the single-df comparisons.

17.6 Simple Effects Involving the Nonrepeated Factor

Keppel illustrates how you might conduct a simple effects analysis of motivation on the fourth trial (Table 17-12). He obtains an F of 13.21, which I would test against a critical F using Tukey's HSD. When a significant simple effect is found, the next step would be to compute the necessary single-df comparisons to determine which specific means differed. Keppel computes a complex comparison involving the group told to do its best vs. the groups given highly motivating instructions and a $5 incentive. For this comparison, he finds an F of 22.25.

Simple Effects Involving the Nonrepeated Factor using StatView

If your repeated factor is still compacted, you'll need to expand it. Then, all that you're doing (really) is a one-way between groups ANOVA. Thus, from the Analyze menu, you'd choose the ANOVA-factorial. Then, drag the Motivation factor into the Between window and the Trial 4 column into the DV window.
You'll get the identical source table that Keppel shows in Table 17-12, as seen below:

ANOVA Table for Trial 4
Source       df   Sum of Squares   Mean Square   F-Value   P-Value
Motivation   2    202.667          101.333       13.217    .0021
Residual     9    69.000           7.667

Model II estimate of between component variance: 23.417

Next, suppose that you want to compute the complex comparison that Keppel illustrates. What you'll need to do is to change the group numbers (in the Motivation column) for the second and third groups to make them identical. Then, you'd simply compute the same analysis that you computed earlier. You'll get the source table seen below, which is not quite what you want.

ANOVA Table for Trial 4
Source       df   Sum of Squares   Mean Square   F-Value   P-Value
Motivation   1    170.667          170.667       16.898    .0021
Residual     10   101.000          10.100

Model II estimate of between component variance: 30.106

Note that the MS_Comparison is what you want, but not the F, because the error term is wrong. It's based on one group with 4 people and the other group with 8 people, which is not what we want. Instead, you'd take the MS Error from the simple effects analysis to get the proper F-ratio of 22.25.

Simple Effects Involving the Nonrepeated Factor using SuperANOVA

Again, this would be a 1-Factor ANOVA. Thus, you might need to Expand your compacted variable, because you're only going to use the one column (for Trial 4). The source table produced would be virtually identical to the one produced by StatView:

Source       df   Sum of Squares   Mean Square   F-Value   P-Value
Motivation   2    202.667          101.333       13.217    .0021
Residual     9    69.000           7.667
Dependent: Trial 4

For the complex comparison, the procedure would also be much like that for StatView. You'd have to relabel some of the Motivation levels to make them identical, then re-compute the analysis you'd done for the simple effects (because now you'd only have 2 levels). Again, as seen in the source table below, you've got the correct MS_Comparison, but you need the MS Error from the simple effects analysis.

Source       df   Sum of Squares   Mean Square   F-Value   P-Value
Motivation   1    170.667          170.667       16.898    .0021
Residual     10   101.000          10.100
Dependent: Trial 4
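That final substitution can be sketched with the printed values (the small difference from Keppel's 22.25 is rounding):

```python
# Complex comparison (Do Best vs. pooled High Instructions + $5) on Trial 4.
# The relabeled-groups ANOVA supplies the right MS_Comparison but the wrong
# error term (unequal groups of 4 and 8, MS = 10.100), so divide by the
# Residual MS from the simple-effects ANOVA instead.
ms_comparison = 170.667    # Motivation MS from the relabeled-groups ANOVA
ms_error_simple = 7.667    # Residual MS from the simple-effects ANOVA

f_comparison = ms_comparison / ms_error_simple
print(round(f_comparison, 2))   # 22.26 (Keppel reports 22.25)
```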