GraphPad Statistics Guide

Key facts about controlling the FDR


Prism uses the concept of False Discovery Rate (FDR) as part of our method to define outliers (from a stack of values, or during nonlinear regression). Prism can also use the FDR method when calculating many t tests at once, when analyzing a stack of P values computed elsewhere, and as a multiple comparisons method following one-, two-, or three-way ANOVA.

Key facts about the False Discovery Rate approach

This approach first computes a P value for each comparison. When used as a follow-up to ANOVA, the comparisons are done using Fisher's Least Significant Difference approach (which by itself does not correct for multiple comparisons, but does pool the variances to increase the number of degrees of freedom). When used to analyze a set of t tests, each t test is first computed individually. When analyzing a set of P values, of course, you enter these P values directly.

The goal is explained here. You enter Q, the desired false discovery rate (as a percentage), and Prism then tells you which P values are low enough to be called a "discovery", with the goal of ensuring that no more than Q% of those "discoveries" are actually false positives.

Prism lets you choose one of three algorithms for deciding which P values are small enough to be a "discovery". The Benjamini and Hochberg method was developed first, so it is the most standard. The Benjamini, Krieger, and Yekutieli method has more power, so it is preferred. The method of Benjamini and Yekutieli makes fewer assumptions, but has much less power.
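As a sketch, the Benjamini-Hochberg "step-up" rule can be written out as follows. This is a minimal Python illustration with made-up P values, not Prism's actual code, and the function name is our own:

```python
def bh_discoveries(p_values, Q):
    """Flag which P values count as "discoveries" at false discovery
    rate Q (Q given as a fraction, e.g. 0.05 for 5%).
    Benjamini-Hochberg step-up rule; a sketch, not Prism's exact code."""
    m = len(p_values)
    # Rank the P values from smallest to largest.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k such that P(k) <= (k / m) * Q.
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= (rank / m) * Q:
            k = rank
    # The comparisons with the k smallest P values are the discoveries.
    flags = [False] * m
    for rank, i in enumerate(order, start=1):
        flags[i] = rank <= k
    return flags

# Hypothetical P values from five comparisons:
print(bh_discoveries([0.001, 0.008, 0.039, 0.041, 0.60], 0.05))
# -> [True, True, False, False, False]
```

Note that a P value can fail its own threshold yet still be flagged, if a larger P value passes; that is why the rule hunts for the largest qualifying rank before flagging.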

This FDR approach does not use the concept or phrase "statistically significant" when a P value is small, but instead uses the term "discovery". (Some authors use terminology differently.)

The FDR approach cannot compute confidence intervals to accompany each comparison.

Q (note the upper case) is a value you enter as the desired FDR. Prism also computes q (lower case) for each comparison. This value q is the value of Q at which this particular comparison would be right on the border of being classified as a discovery or not. The value q depends not only on that one comparison, but on the number of comparisons in the family and the distribution of P values.

The q values Prism reports are FDR-adjusted p values, not FDR-corrected P values. This is a subtle distinction.
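For the Benjamini-Hochberg method, the FDR-adjusted P value for a comparison can be sketched as p·m/rank, made monotone from the largest P value down. This is an illustration under Benjamini-Hochberg assumptions with hypothetical P values; Prism's output may differ in detail:

```python
def bh_q_values(p_values):
    """Benjamini-Hochberg FDR-adjusted P values (q values): for each
    comparison, the smallest Q at which it would just qualify as a
    discovery. A sketch, not Prism's exact implementation."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    smallest_so_far = 1.0
    # Walk from the largest P value down, enforcing that q values
    # never decrease as P values increase.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        smallest_so_far = min(smallest_so_far, p_values[i] * m / rank)
        q[i] = smallest_so_far
    return q

# Hypothetical P values; note the largest P value (0.12) gets q = 0.12.
print(bh_q_values([0.005, 0.009, 0.02, 0.04, 0.12]))
```

This makes the dependence concrete: each q value is built from that comparison's P value, its rank, and the total number of comparisons, never from the Q you entered.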

If all the null hypotheses are true, there is only a Q% chance that you will find one or more discoveries (where Q is the false discovery rate you chose).
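This fact can be checked with a quick Monte Carlo simulation. When every null hypothesis is true, the P values are uniform between 0 and 1; the sketch below (our own hypothetical simulation, using the Benjamini-Hochberg rule) counts how often at least one "discovery" appears:

```python
import random

def any_bh_discovery(p_values, Q):
    """True if the Benjamini-Hochberg rule at rate Q flags at least
    one P value as a discovery (sketch)."""
    m = len(p_values)
    ps = sorted(p_values)
    return any(p <= (rank / m) * Q for rank, p in enumerate(ps, start=1))

random.seed(1)  # arbitrary seed, for reproducibility
Q, m, trials = 0.05, 8, 20000
# Under the global null, each P value is uniform on (0, 1).
hits = sum(
    any_bh_discovery([random.random() for _ in range(m)], Q)
    for _ in range(trials)
)
print(hits / trials)  # close to Q = 0.05
```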

If all the P values are less than your chosen value of Q (correcting for the fact that P values are fractions and Q is a percentage), then all the comparisons will be flagged as discoveries. (This rule is not true when you choose the method of Benjamini & Yekutieli).

If all the P values are greater than your chosen value of Q, then no comparison will be flagged as a discovery.

The q value is generally larger than the corresponding P value. The exception: the comparison with the largest P value can have a q value equal to its P value.

The value of q is set by the P value for that comparison as well as the other P values and the number of comparisons. The value you enter for Q does not impact the computation of q.

The algorithms in Prism control the FDR, not the pFDR (which won't be explained here).

The q values determined by these methods tend to be lower (and are never higher) than the adjusted P values computed when using the usual multiple comparisons methods (Bonferroni, etc.).
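The reason is arithmetic: a Bonferroni-adjusted P value is p·m, while a Benjamini-Hochberg q value is at most p·m/rank, and rank is at least 1. A small numeric sketch with made-up P values:

```python
# Hypothetical P values, already sorted from smallest to largest.
p_sorted = [0.005, 0.009, 0.02, 0.04, 0.12]
m = len(p_sorted)

# Bonferroni-adjusted P values: multiply each by m, cap at 1.
bonferroni = [min(1.0, p * m) for p in p_sorted]

# Benjamini-Hochberg q values: p * m / rank, then made monotone
# from the largest P value down.
q = [p * m / rank for rank, p in enumerate(p_sorted, start=1)]
for i in range(m - 2, -1, -1):
    q[i] = min(q[i], q[i + 1])

# Each q value is no larger than the Bonferroni-adjusted value.
assert all(qi <= bi for qi, bi in zip(q, bonferroni))
```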

Great nonmathematical review: Glickman, M. E., Rao, S. R., & Schultz, M. R. (2014). False discovery rate control is a recommended alternative to Bonferroni-type adjustments in health studies. Journal of Clinical Epidemiology, 67(8), 850–857.