Fitting a mixed effects model to repeated-measures one-way data compares the means of three or more matched groups. The term repeated-measures strictly applies only when you give treatments repeatedly to each subject, and the term randomized block is used when you randomly assign treatments within each group (block) of matched subjects. The analyses are identical for repeated-measures and randomized block experiments, and Prism always uses the term repeated-measures.
Read about using the mixed model to fit repeated measures data.
There is one fixed effect in the model: the variable that determines which column each value was placed into. The mixed effects model results present a P value that answers this question: If all the populations really have the same mean (the treatments are ineffective), what is the chance that random sampling would result in means as far apart (or more so) as observed in this experiment?
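The computations Prism performs are not shown here, but a minimal sketch of the same kind of model in Python with statsmodels may make the structure concrete. The long-format table and its column names (value, treatment, subject) are hypothetical, not something Prism produces:

```python
# Sketch only: one fixed effect (treatment, i.e. the column a value was
# placed into) plus a random intercept for each subject.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per measurement (hypothetical values)
data = pd.DataFrame({
    "subject":   ["s1", "s2", "s3", "s4", "s5"] * 3,
    "treatment": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "value":     [23, 26, 21, 25, 24,
                  30, 33, 29, 32, 31,
                  28, 30, 26, 31, 29],
})

# Fixed effect: treatment. Random effect: subject (the groups= argument).
model = smf.mixedlm("value ~ treatment", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```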
If the overall P value is large, the data do not give you any reason to conclude that the means differ. Even if the true means were equal, you would not be surprised to find means this far apart just by chance. This is not the same as saying that the true means are the same. You just don't have compelling evidence that they differ.
If the overall P value is small, then it is unlikely that the differences you observed are due to random sampling. You can reject the idea that all the populations have identical means. This doesn't mean that every mean differs from every other mean, only that at least one differs from the rest. Look at the results of post tests to identify where the differences are.
The mixed effects model treats the different subjects (participants, litters, etc.) as a random variable. The residual variation within subjects is also treated as random. Prism presents each of these components of variation as both a SD and a variance (which is the SD squared). You, or more likely your statistical consultant, may want these values to compare with the output of other programs. Calculating them is complicated and requires matrix algebra.
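For illustration, here is a sketch (again with hypothetical, simulated data) of how the analogous variance components can be read from a statsmodels fit; the SD of each component is just the square root of its variance:

```python
# Sketch: extracting the subject (random effect) and residual variance
# components from a statsmodels mixed model fit. Data and column names
# are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
subjects = np.repeat([f"s{i}" for i in range(8)], 3)      # 8 subjects x 3 treatments
treatments = np.tile(["A", "B", "C"], 8)
subject_shift = np.repeat(rng.normal(0, 2.0, 8), 3)       # between-subject variation
value = (20 + (treatments == "B") * 3 + (treatments == "C") * 1.5
         + subject_shift + rng.normal(0, 1.0, 24))        # residual variation

data = pd.DataFrame({"subject": subjects, "treatment": treatments, "value": value})
result = smf.mixedlm("value ~ treatment", data, groups=data["subject"]).fit()

subject_variance  = float(result.cov_re.iloc[0, 0])   # variance of the subject random effect
residual_variance = result.scale                      # variance of the residual random variation
print("subject  SD:", np.sqrt(subject_variance),  " variance:", subject_variance)
print("residual SD:", np.sqrt(residual_variance), " variance:", residual_variance)
```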
A repeated-measures experimental design can be very powerful, as it controls for factors that cause variability between subjects. If the matching is effective, the repeated-measures test will yield a smaller P value than an ordinary ANOVA. The repeated-measures test is more powerful because it separates between-subject variability from within-subject variability. If the matching is ineffective, however, the repeated-measures test can be less powerful because it has fewer degrees of freedom.
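One way to see this outside of Prism is to analyze the same matched data with and without the matching. The sketch below uses hypothetical simulated data, scipy's f_oneway for the ordinary analysis and statsmodels' AnovaRM for the repeated-measures analysis; when subjects differ consistently from one another, the repeated-measures P value is usually much smaller:

```python
# Sketch: the same matched data analyzed with and without the matching.
# Hypothetical data; large between-subject differences with modest
# treatment effects favor the repeated-measures analysis.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
baseline = rng.normal(25, 5.0, 8)                     # big between-subject differences
effects = {"A": 0.0, "B": 1.5, "C": 2.5}              # modest treatment effects

rows = []
for i, b in enumerate(baseline):
    for t, e in effects.items():
        rows.append({"subject": f"s{i}", "treatment": t,
                     "value": b + e + rng.normal(0, 1.0)})
data = pd.DataFrame(rows)

# Ordinary one-way ANOVA ignores the matching
groups = [g["value"].to_numpy() for _, g in data.groupby("treatment")]
print("ordinary ANOVA P:", f_oneway(*groups).pvalue)

# Repeated-measures ANOVA uses the matching (subject) information
rm = AnovaRM(data, depvar="value", subject="subject", within=["treatment"]).fit()
print(rm.anova_table)
```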
Prism tests whether the matching was effective and reports a P value. This P value comes from a chi-square statistic computed by comparing the fit of the full mixed effects model with the fit of a simpler model that does not account for repeated measures. If this P value is low, you can conclude that the matching was effective. If the P value is high, you can conclude that the matching was not effective and should reconsider your experimental design.
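Prism's exact computation is not reproduced here, but the general idea of a likelihood-ratio comparison of this kind can be sketched as follows (hypothetical simulated data; both models are fit by maximum likelihood so their log-likelihoods are comparable, and because a variance cannot be negative the chi-square P value computed this way is somewhat conservative):

```python
# Sketch of a likelihood-ratio comparison: a mixed model with a subject
# random effect versus a fixed-effects-only model, both fit by ML.
# This illustrates the idea; it is not Prism's exact computation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(2)
subjects = np.repeat([f"s{i}" for i in range(10)], 3)
treatments = np.tile(["A", "B", "C"], 10)
value = (20 + (treatments == "B") * 2
         + np.repeat(rng.normal(0, 3.0, 10), 3)      # subject-to-subject variation
         + rng.normal(0, 1.0, 30))
data = pd.DataFrame({"subject": subjects, "treatment": treatments, "value": value})

full    = smf.mixedlm("value ~ treatment", data, groups=data["subject"]).fit(reml=False)
reduced = smf.ols("value ~ treatment", data).fit()   # ignores the matching

lr_stat = 2 * (full.llf - reduced.llf)
p_value = chi2.sf(lr_stat, df=1)                     # 1 extra parameter: the subject variance
print("chi-square:", lr_stat, "P:", p_value)
```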
Prism optionally expresses the goodness-of-fit in a few ways. These will only be meaningful to someone who understands mixed effects models deeply. Most scientists will ignore these results or uncheck the option so they are not reported.
If you checked the option not to assume sphericity, Prism does two things differently.
• It applies the correction of Geisser and Greenhouse. You'll see smaller degrees of freedom, which usually are not integers. The corresponding P value is higher than it would have been without that correction.
• It reports the value of epsilon, which is a measure of how badly the data violate the assumption of sphericity (a sketch of this calculation appears after this list).
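As a sketch of what epsilon measures, the standard Geisser-Greenhouse formula can be computed from a wide-format data matrix as shown below; this is the textbook calculation on hypothetical data, not necessarily Prism's exact implementation:

```python
# Sketch of the Geisser-Greenhouse epsilon calculation for a wide-format
# data matrix (rows = subjects, columns = treatments). Hypothetical data.
import numpy as np

y = np.array([
    [23., 30., 28.],
    [26., 33., 30.],
    [21., 29., 26.],
    [25., 32., 31.],
    [24., 31., 29.],
    [22., 30., 27.],
])
n, k = y.shape

s = np.cov(y, rowvar=False)                      # k x k covariance of the treatments
# Double-center the covariance matrix
s_dc = (s - s.mean(axis=0, keepdims=True)
          - s.mean(axis=1, keepdims=True)
          + s.mean())

epsilon = np.trace(s_dc) ** 2 / ((k - 1) * np.sum(s_dc ** 2))
print("epsilon:", epsilon)                       # 1 means sphericity holds; 1/(k-1) is the minimum

# The correction multiplies both degrees of freedom by epsilon,
# which is why the corrected df are usually not integers.
df1, df2 = epsilon * (k - 1), epsilon * (k - 1) * (n - 1)
print("corrected df:", df1, df2)
```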
Learn about multiple comparisons tests after repeated measures ANOVA.
Before interpreting the results, review the analysis checklist.