

General ANOVA Overview


What is ANOVA?

Analysis of Variance (ANOVA) is a statistical method for comparing the means of three or more groups. ANOVA works by partitioning the total variation in your data into different components, then comparing the variation between groups to the variation within groups.
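
To make the partitioning concrete, here is a minimal sketch in Python (an illustration only, not Prism's own calculation) that splits the total sum of squares into a between-group piece and a within-group piece for three hypothetical groups:

import numpy as np

# Three hypothetical groups of measurements (illustration only)
groups = [
    np.array([4.1, 5.0, 4.6, 5.3]),  # Group A
    np.array([6.2, 5.8, 6.9, 6.4]),  # Group B
    np.array([5.1, 4.9, 5.6, 5.4]),  # Group C
]

all_values = np.concatenate(groups)
grand_mean = all_values.mean()

# Between-group variation: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# Within-group variation: scatter of values around their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Total variation: scatter of all values around the grand mean
ss_total = ((all_values - grand_mean) ** 2).sum()

# The partition that ANOVA relies on: total = between + within
assert np.isclose(ss_total, ss_between + ss_within)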

Why not just run multiple t-tests?

If you want to compare three groups, you might be tempted to run three separate t-tests (Group A vs. B, A vs. C, and B vs. C). The problem with this approach is that each test carries its own chance of a false positive (Type I error), set by the alpha value you choose. When you run multiple tests, the chance of at least one false positive grows with each additional comparison, making it more likely that you will incorrectly conclude that groups differ. ANOVA avoids this problem by testing all groups simultaneously with a single test that maintains the overall Type I error rate.
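
To see how quickly the risk grows, here is a small back-of-the-envelope calculation in Python. It uses the formula 1 - (1 - alpha)^k, which assumes the k comparisons are independent; pairwise t-tests on the same groups are not strictly independent, so treat the result as an approximation:

# Approximate chance of at least one false positive across k comparisons,
# each run at significance level alpha (assumes independent comparisons)
alpha = 0.05
k = 3  # A vs. B, A vs. C, B vs. C
familywise_error = 1 - (1 - alpha) ** k
print(f"Chance of at least one false positive: {familywise_error:.3f}")  # about 0.143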

The core idea behind ANOVA

ANOVA compares two types of variation:

Between-group variation: How much do the group means differ from each other?

Within-group variation: How much do individual values vary within each group?

If the between-group variation is substantially larger than the within-group variation, this suggests that group membership (your factor) has a real effect on the outcome. ANOVA quantifies this comparison with an F-ratio and P value.
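
Here is a minimal sketch of that calculation in Python (an illustration with hypothetical data, not Prism's implementation). The between- and within-group sums of squares are converted to mean squares using their degrees of freedom, their ratio gives F, and the P value is the upper tail of the F distribution:

import numpy as np
from scipy import stats

# Hypothetical data: three groups of four measurements each
groups = [
    np.array([4.1, 5.0, 4.6, 5.3]),
    np.array([6.2, 5.8, 6.9, 6.4]),
    np.array([5.1, 4.9, 5.6, 5.4]),
]

k = len(groups)                    # number of groups
n = sum(len(g) for g in groups)    # total number of values
grand_mean = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)  # between-group mean square (df = k - 1)
ms_within = ss_within / (n - k)    # within-group mean square (df = n - k)

F = ms_between / ms_within
p = stats.f.sf(F, k - 1, n - k)    # area in the upper tail of the F distribution

print(f"F = {F:.2f}, P = {p:.4f}")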

When is ANOVA appropriate?

ANOVA is designed for:

Continuous outcome variables measured on an interval or ratio scale

Categorical predictor variables (factors) that define your groups

Three or more groups to compare (with only two groups, ANOVA is equivalent to an unpaired t-test, which is the more common choice)

Independent observations (unless using repeated measures designs)
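
For example, data that meet these requirements can be entered as one list of continuous measurements per group and analyzed with a single call; the sketch below uses SciPy's one-way ANOVA on hypothetical values:

from scipy import stats

# One list of continuous measurements per group (hypothetical values)
control   = [4.1, 5.0, 4.6, 5.3]
treated_a = [6.2, 5.8, 6.9, 6.4]
treated_b = [5.1, 4.9, 5.6, 5.4]

result = stats.f_oneway(control, treated_a, treated_b)  # ordinary one-way ANOVA
print(f"F = {result.statistic:.2f}, P = {result.pvalue:.4f}")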

 
