The repeated measures one-way ANOVA in Prism compares the means of three or more groups for which the same subjects were measured (or were matched) in each group. Read elsewhere to learn about choosing a test and interpreting the results.
The whole point of using a repeated-measures test is to control for experimental variability. Some factors you don't control in the experiment will affect all the measurements from one subject equally, and so will not affect the differences between measurements within that subject. By analyzing only the differences, therefore, a matched test controls for some of the sources of scatter.
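To illustrate the idea numerically (a hypothetical sketch, not Prism output): when subject-to-subject baselines dominate the raw scatter, subtracting each subject's own control value removes that shared variability, leaving only the much smaller within-subject scatter.

```python
from statistics import pstdev

# Hypothetical paired data: the same four subjects measured under two conditions.
control = [10.0, 20.0, 30.0, 40.0]   # large subject-to-subject spread
treated = [12.1, 21.9, 32.2, 41.8]   # same subjects, roughly +2 treatment effect

# Within-subject differences remove whatever factor shifts a given
# subject's measurements up or down across all conditions.
diffs = [t - c for t, c in zip(treated, control)]

print(pstdev(control))  # scatter of raw values, dominated by subject baselines
print(pstdev(diffs))    # scatter of differences, far smaller
```

The treatment effect (about +2) is obvious in the differences even though it is dwarfed by the spread among subjects in the raw data.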
The matching should be part of the experimental design and not something you do after collecting data. Prism tests the effectiveness of matching with an F test (distinct from the main F test of differences between columns). If the P value for matching is large (say larger than 0.05), you should question whether it made sense to use a repeated-measures test. Ideally, your choice of whether to use a repeated-measures test should be based not only on this one P value, but also on the experimental design and the results you have seen in other similar experiments.
The results of repeated-measures ANOVA only make sense when the subjects are independent. Prism cannot test this assumption. You must think about the experimental design. For example, the errors are not independent if you have six rows of data, but these were obtained from three animals, with duplicate measurements in each animal. In this case, some factor may affect the measurements from one animal. Since this factor would affect data in two (but not all) rows, the rows (subjects) are not independent.
Repeated-measures ANOVA assumes that each measurement is the sum of an overall mean, a treatment effect (the average difference between subjects given a particular treatment and the overall mean), an individual effect (the average difference between measurements made in a certain subject and the overall mean) and a random component. Furthermore, it assumes that the random component follows a Gaussian distribution and that the standard deviation does not vary between individuals (rows) or treatments (columns). While this assumption is not too important with large samples, it can be important with small sample sizes. Prism does not test for violations of this assumption.
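The additive model described above can be sketched as a simulation (a hypothetical illustration; every number here is made up, not a Prism default):

```python
import random

random.seed(1)

grand_mean = 50.0
treatment_effects = [-2.0, 0.0, 2.0]      # one per column (treatment)
subject_effects = [-5.0, -1.0, 1.0, 5.0]  # one per row (subject)
sigma = 1.0  # SD of the random component, assumed the same for every cell

# Each measurement = overall mean + treatment effect + subject effect
#                    + Gaussian random error.
table = [
    [grand_mean + t + s + random.gauss(0.0, sigma) for t in treatment_effects]
    for s in subject_effects
]

for row in table:
    print(["%.1f" % x for x in row])
```

Data generated this way satisfy all the assumptions listed above; real data may not, and Prism does not test for the constant-SD (homoscedasticity) part of the assumption.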
The null hypothesis for this test is that the means of all groups are equal. The F statistic is the ratio of the variance among the group means to the variance within the groups. If this statistic is large, it suggests that the group means are not the same (given the amount of variance in the data within each group). As the F statistic increases, the corresponding P value decreases. If the F statistic is large enough (the P value small enough), you can reject the null hypothesis.
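As a concrete sketch (hypothetical data, plain Python, not Prism's own code), here is the repeated-measures partition of the sums of squares and the two F ratios discussed above: one testing differences among treatments, and one testing the effectiveness of matching:

```python
# Rows = subjects, columns = treatments (hypothetical data).
data = [
    [10.0, 11.0, 12.0],
    [12.0, 13.0, 14.0],
    [14.0, 15.0, 16.0],
    [16.0, 17.0, 19.0],
]
n = len(data)      # number of subjects
k = len(data[0])   # number of treatments

grand = sum(sum(row) for row in data) / (n * k)
treat_means = [sum(row[j] for row in data) / n for j in range(k)]
subj_means = [sum(row) / k for row in data]

ss_total = sum((x - grand) ** 2 for row in data for x in row)
ss_treat = n * sum((m - grand) ** 2 for m in treat_means)
ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
ss_error = ss_total - ss_treat - ss_subj  # residual, after removing subject effects

df_treat, df_subj, df_error = k - 1, n - 1, (k - 1) * (n - 1)

# Main F ratio: differences among treatment (column) means.
f_treat = (ss_treat / df_treat) / (ss_error / df_error)
# F ratio testing the effectiveness of matching (subject effects).
f_match = (ss_subj / df_subj) / (ss_error / df_error)

print(f_treat, f_match)
```

The P values come from the F distribution with (df_treat, df_error) and (df_subj, df_error) degrees of freedom, respectively. Note how matching shrinks the denominator: the subject-to-subject variability is removed from ss_error rather than inflating it.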
One-way ANOVA compares three or more groups defined by a single grouping factor. In this design, each group represents one level of the fixed factor. For example, you might compare three groups: a control group, a drug treatment group, and a group treated with drug plus antagonist. Or you might compare a control group with five other groups that each receive a different drug treatment. In both cases, the groups are levels of a single "treatment" factor.
Some experiments involve more than one factor. For example, you might compare three different drugs in men and women. There are two factors in that experiment: drug treatment and gender. Similarly, there are two factors if you wish to compare the effect of drug treatment at several time points. These data need to be analyzed by two-way ANOVA, also called two-factor ANOVA.
Prism's "repeated measures one-way ANOVA" is a bit of a misnomer. In reality, the analysis is a mixed effects analysis with two factors: one fixed and one random. The grouping factor in the analysis (the one that defines the groups being compared) should be a fixed factor. In other words, this test investigates differences among the means of the particular groups you have collected data from. If the grouping factor were a random factor, the analysis would assume that you had randomly selected groups from an infinite (or at least large) number of possible groups, and that you wanted to reach conclusions about differences among ALL the groups, even the ones you didn't include in this experiment. This sort of analysis, known as a "random effects model", is much less common, and Prism does not provide options to use this sort of model within this analysis.
There is a second factor in the repeated measures one-way ANOVA, and that is the subject factor. In this design, the same subject (or a matched subject) is measured under each group condition. A subject factor is needed to assign the values in each group to a particular subject. This factor is a "random factor" because the subjects that you're using are only representative of the larger population from which they came. You want to account for variability in the measurements due to the fact that you used the same subject in each group, but you aren't interested in these specific subjects. In the statistics vernacular, because the subjects of this random factor show up in each group of the fixed factor, the random factor is called a "crossed random effect", and the analysis design is called a "mixed effects model" (because it includes both fixed and random effects).
Repeated-measures ANOVA assumes that the random error truly is random. A random factor that causes a measurement in one subject to be a bit high (or low) should have no effect on the next measurement in the same subject. This assumption is called circularity or sphericity. It is closely related to another term you may encounter, compound symmetry.
Repeated-measures ANOVA is quite sensitive to violations of the assumption of sphericity. If the assumption is violated, the P value will be too small. One way to violate this assumption is to make the repeated measurements in too short a time interval, so that random factors that cause a particular value to be high (or low) don't wash away or dissipate before the next measurement. To avoid violating the assumption, wait long enough between treatments so the subject is essentially the same as before the treatment. When possible, also randomize the order of treatments.
You only have to worry about the assumption of sphericity when you perform a repeated-measures experiment, where each row of data represents repeated measurements from a single subject. It is impossible to violate the assumption with randomized block experiments, where each row of data represents data from a matched set of subjects.
If you cannot accept the assumption of sphericity, you can specify that on the Parameters dialog. In that case, Prism will take into account possible violations of the assumption (using the method of Geisser and Greenhouse) and report a larger P value.
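A sketch of the correction factor (epsilon) behind that larger P value, computed in plain Python from the standard Greenhouse–Geisser definition (hypothetical data; this is an illustration of the published formula, not Prism's internal code):

```python
# Rows = subjects, columns = repeated treatments (hypothetical data).
data = [
    [10.0, 11.0, 12.0],
    [12.0, 13.0, 15.0],
    [14.0, 15.0, 16.0],
    [16.0, 18.0, 19.0],
]
n = len(data)
k = len(data[0])

# Sample covariance matrix of the k treatment columns.
col_means = [sum(row[j] for row in data) / n for j in range(k)]
S = [[sum((row[i] - col_means[i]) * (row[j] - col_means[j]) for row in data) / (n - 1)
      for j in range(k)] for i in range(k)]

# Double-center S (subtract row and column means, add back the grand mean).
row_m = [sum(S[i]) / k for i in range(k)]
col_m = [sum(S[i][j] for i in range(k)) / k for j in range(k)]
g = sum(row_m) / k
Sc = [[S[i][j] - row_m[i] - col_m[j] + g for j in range(k)] for i in range(k)]

trace = sum(Sc[i][i] for i in range(k))
sum_sq = sum(Sc[i][j] ** 2 for i in range(k) for j in range(k))

# Greenhouse-Geisser epsilon: 1.0 when sphericity holds exactly;
# the worst possible violation gives 1/(k-1).
epsilon = trace ** 2 / ((k - 1) * sum_sq)
print(epsilon)
```

The corrected P value is obtained by multiplying both degrees of freedom of the main F test by epsilon, which always makes the P value larger (or leaves it unchanged when epsilon is 1).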
Starting with Prism 8, repeated measures ANOVA can be calculated with missing values by fitting a mixed effects model. But the results can only be interpreted if the values are missing at random. If a value is missing because it was too high (or too low) to measure, it is not missing at random. Likewise, if values are missing because a treatment is toxic, they are not missing at random.