When using a repeated measures or matched design, the IV can have 3 or more levels. The same issues of experimental design as examined previously (i.e., for a repeated measures/matched IV with 2 levels) apply.
Note that if you calculated correlation coefficients, the scores across the repeated experimental conditions would be correlated. This is because the same participants are in every condition – the scores are not independent!
The F ratio
F = MSbetween conditions / MSerror
MSbetween conditions = SSbetween conditions / dfbetween conditions
MSerror = SSerror / dferror
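As a quick illustration (a minimal sketch with made-up SS and df values, not data from this course), the F ratio can be computed directly from the sums of squares and degrees of freedom:

```python
# Minimal sketch: computing MS and F from assumed (made-up) SS and df values.
ss_between_conditions = 120.0   # hypothetical SS for the effect of the IV
df_between_conditions = 3       # k - 1, assuming k = 4 conditions
ss_error = 270.0                # hypothetical SS for the error term
df_error = 27                   # (k - 1) * (n - 1), assuming n = 10 participants

ms_between = ss_between_conditions / df_between_conditions  # 40.0
ms_error = ss_error / df_error                              # 10.0
f_ratio = ms_between / ms_error                             # 4.0

print(f"MSbetween = {ms_between}, MSerror = {ms_error}, F = {f_ratio}")
```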
Violations of normality
ANOVA is robust to minor to moderate violations. If non-normality is found in a minority of conditions and is not of high magnitude, you can continue to use the ANOVA. Note it as a limitation.
You may modify the data to improve normality, e.g., transformations or Winsorisation. If outliers cause the non-normality, you may exclude the outliers. Alternatively, use a non-parametric test (not discussed further in this course!).
Effect size for repeated measures: partial eta squared
Benchmarks: Small = .01, Medium = .09, Large = .25
SSbetween subjects is partitioned out, so it does not enter the effect size calculation.
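For reference, partial eta squared for the repeated measures effect is SSbetween conditions / (SSbetween conditions + SSerror). A minimal sketch, reusing the made-up SS values from above:

```python
# Minimal sketch: partial eta squared from the same made-up SS values as above.
ss_between_conditions = 120.0
ss_error = 270.0

partial_eta_squared = ss_between_conditions / (ss_between_conditions + ss_error)
print(f"partial eta squared = {partial_eta_squared:.3f}")  # 120 / 390 ~ 0.308
```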
Null hypothesis: The independent variable has no effect on the dependent variable. τi will not contribute anything towards the individual's score. The condition means will be the same: μ1 = μ2 = μ3 = ... Any numerical differences between the means are due to sampling error.
Alternative hypothesis: The independent variable has an effect on the dependent variable. τi will influence the individual's score. The condition means are not all equal, i.e., the statement μ1 = μ2 = μ3 = ... is false. The observed differences between the means are not due to sampling error.
Multiple comparisons
You can do the t tests by hand (you need to look up the Bonferroni critical values in the back of the book).
We'll use pairwise Bonferroni comparisons via the "Compare main effects" option in SPSS. The output table gives the mean difference, and the significance associated with the difference, for all possible pairwise comparisons.
To get t for a pairwise comparison, divide the mean difference by its standard error (df = n − 1).
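As an illustration outside of SPSS (a minimal sketch using made-up scores and the scipy library, which is not part of this course), pairwise paired-samples t tests with a Bonferroni adjustment could look like this:

```python
# Minimal sketch: pairwise paired-samples t tests with a Bonferroni adjustment.
# The data are made up: rows = participants, columns = three repeated conditions.
from itertools import combinations

import numpy as np
from scipy import stats

scores = np.array([
    [7, 5, 4],
    [8, 6, 5],
    [6, 6, 3],
    [9, 7, 6],
    [7, 4, 4],
])  # shape (n participants, k conditions)

k = scores.shape[1]
n_comparisons = k * (k - 1) // 2  # 3 pairwise comparisons for k = 3

for i, j in combinations(range(k), 2):
    t, p = stats.ttest_rel(scores[:, i], scores[:, j])  # paired t test: df = n - 1
    p_bonferroni = min(p * n_comparisons, 1.0)          # Bonferroni-adjusted p
    mean_diff = np.mean(scores[:, i] - scores[:, j])
    print(f"Condition {i + 1} vs {j + 1}: mean diff = {mean_diff:.2f}, "
          f"t = {t:.2f}, adjusted p = {p_bonferroni:.3f}")
```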
Inference by eye
Different rules apply than for independent groups. For repeated measures, you need the confidence interval (or standard error) of the mean differences between the conditions you want to compare. If the null hypothesis is true, the mean difference = 0. If the CI of the difference scores does not include zero, conclude a significant difference at that level of confidence (e.g., a 95% interval gives p < .05).
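A minimal sketch of this idea (made-up data), computing the 95% CI of the difference scores between two conditions:

```python
# Minimal sketch: 95% CI of the mean difference scores between two conditions.
import numpy as np
from scipy import stats

condition_1 = np.array([7, 8, 6, 9, 7])
condition_2 = np.array([5, 6, 6, 7, 4])

differences = condition_1 - condition_2
n = len(differences)
mean_diff = differences.mean()
se_diff = differences.std(ddof=1) / np.sqrt(n)   # standard error of the mean difference
t_crit = stats.t.ppf(0.975, df=n - 1)            # two-tailed 95% critical value

ci_lower = mean_diff - t_crit * se_diff
ci_upper = mean_diff + t_crit * se_diff
print(f"Mean difference = {mean_diff:.2f}, 95% CI = [{ci_lower:.2f}, {ci_upper:.2f}]")
# If the interval excludes zero, conclude a significant difference at p < .05.
```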
When you see "Tests of Within-Subjects Effects" in the SPSS output, it tells us there is at least one repeated measures variable in the analysis.
Using SPSS to obtain a repeated measures one-way ANOVA:
1. Click on Analyze > General Linear Model > Repeated Measures.
2. In the dialogue box, enter the name of the independent variable in the Within-Subject Factor Name: box. Enter the Number of Levels: in the second box. Click on the [Add] button. Next, click on the [Define] button.
3. In the next dialogue box, move the levels of the repeated measures independent variable into the Within-Subjects Variables box.
4. Click on the [Options] button to obtain Estimates of effect size, Display Means, and Compare main effects.
5. Finish running the procedure. You will need to specify names, i.e., the factor (conditions) and the number of levels.
Structural model for repeated measures (one factor)
Xij = μ + πj + τi + εij
μ = grand mean
πj = variability associated with the jth person (measuring how much they differ from the average person)
τi = the effect of the condition (IV)
εij = error variability
The structural model isolates the variability associated with the individual participant (πj). This can be removed from the error term, which makes the error term smaller. The resulting F ratio can be more sensitive to the effects of the IV.
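To make the model concrete, here is a minimal simulation sketch (all numbers made up) that generates scores as grand mean + person effect + condition effect + error, exactly as in Xij = μ + πj + τi + εij:

```python
# Minimal sketch: simulating scores from the one-factor repeated measures model
# Xij = mu + pi_j + tau_i + eps_ij (all values below are made up).
import numpy as np

rng = np.random.default_rng(1)

n_people, k_conditions = 6, 3
mu = 10.0                                        # grand mean
pi_j = rng.normal(0, 2.0, size=n_people)         # person effects (stable individual differences)
tau_i = np.array([-1.0, 0.0, 1.0])               # condition (IV) effects, sum to zero
eps_ij = rng.normal(0, 1.0, size=(k_conditions, n_people))  # error variability

# X[i, j] = score of person j in condition i
X = mu + pi_j[np.newaxis, :] + tau_i[:, np.newaxis] + eps_ij
print(np.round(X, 2))
```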
Like the structural model, partitioning (dividing up) the total variance is an extension of the independent groups case. Individual variability is partitioned out so it doesn't play a role in the F ratio.
Assumptions
Normality: for each condition (cell).
Homogeneity of variance: Fmax = largest condition variance divided by smallest condition variance. Get the variances from the descriptives by squaring the SDs.
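A minimal sketch of the Fmax check (made-up SDs, as if read from a descriptives table):

```python
# Minimal sketch: Fmax from condition SDs (made-up values from a descriptives table).
sds = [2.1, 2.6, 3.0]                      # one SD per condition
variances = [sd ** 2 for sd in sds]        # variance = SD squared

f_max = max(variances) / min(variances)
print(f"Fmax = {f_max:.2f}")               # 9.0 / 4.41 ~ 2.04
```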
Sphericity (tested with Mauchly's W): The variability in the differences between any pair of conditions is the same as the variability in the differences between any other pair of conditions.
The assumption of sphericity is only relevant when there are three or more levels of the independent variable. The variability in the differences between any pair of conditions is the same as the variability in the differences between any other pair of conditions. Or, in other words, there is equality in the correlations (or, more strictly, the covariances) between each pair of conditions. E.g., the correlation (covariance) between the No Sound and Repeated Tone conditions is the same as between the No Sound and Music conditions, and the Music and Backward Speech conditions, and so on.
If the sphericity assumption is violated, recommended practice is to use adjusted degrees of freedom to evaluate the F statistic. The adjustment will increase the p-value associated with the F statistic. The two main adjustments are Greenhouse-Geisser (more conservative) and Huynh-Feldt (less conservative). Both adjustments are automatically provided when you run the repeated measures ANOVA in SPSS.
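As an illustration of what the adjustment does (a minimal sketch, not the SPSS procedure; the F value and epsilon are made-up numbers, as if read off an SPSS output table), the degrees of freedom are multiplied by epsilon before the p-value is evaluated:

```python
# Minimal sketch: evaluating F with Greenhouse-Geisser-adjusted degrees of freedom.
from scipy import stats

f_value = 4.0
df_conditions = 3        # k - 1
df_error = 27            # (k - 1) * (n - 1)
epsilon = 0.75           # hypothetical Greenhouse-Geisser epsilon (1.0 = no violation)

p_unadjusted = stats.f.sf(f_value, df_conditions, df_error)
p_adjusted = stats.f.sf(f_value, epsilon * df_conditions, epsilon * df_error)

print(f"p (unadjusted df) = {p_unadjusted:.4f}")
print(f"p (adjusted df)   = {p_adjusted:.4f}")  # larger p, i.e., more conservative
```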
Partitioning variance
Total variance is partitioned into:
Between subjects (individual variability)
Within subjects: between conditions (the effect of the experimental condition/IV) and error
SStotal: deviation of each score from the grand mean (dftotal = N − 1, where N is the total number of scores). The total variation in the scores.
SSbetween subjects: deviation of each subject's (participant's) mean from the grand mean (dfbetween subjects = n − 1, where n is the number of participants). Variability due to individual differences (not relevant to F).
SSbetween conditions: deviation of each condition's mean from the grand mean (dfbetween conditions = k − 1). Variability due to the effect of the experimental manipulation.
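Pulling the pieces together, here is a minimal sketch (made-up scores) that computes the full partition and the F ratio; SSerror is obtained as SSwithin subjects minus SSbetween conditions, with df = (n − 1)(k − 1):

```python
# Minimal sketch: partitioning SS for a one-way repeated measures ANOVA (made-up data).
import numpy as np

# Rows = participants (n), columns = conditions (k).
scores = np.array([
    [7, 5, 4],
    [8, 6, 5],
    [6, 6, 3],
    [9, 7, 6],
    [7, 4, 4],
], dtype=float)

n, k = scores.shape
grand_mean = scores.mean()

ss_total = ((scores - grand_mean) ** 2).sum()                                # df = n*k - 1
ss_between_subjects = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()    # df = n - 1
ss_between_conditions = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()  # df = k - 1
ss_within_subjects = ss_total - ss_between_subjects
ss_error = ss_within_subjects - ss_between_conditions                        # df = (n - 1)(k - 1)

df_conditions = k - 1
df_error = (n - 1) * (k - 1)
ms_conditions = ss_between_conditions / df_conditions
ms_error = ss_error / df_error
f_ratio = ms_conditions / ms_error

print(f"SStotal = {ss_total:.2f}, SSbetween subjects = {ss_between_subjects:.2f}")
print(f"SSbetween conditions = {ss_between_conditions:.2f}, SSerror = {ss_error:.2f}")
print(f"F({df_conditions}, {df_error}) = {f_ratio:.2f}")
```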