KNOWLEDGEBASE - ARTICLE #1031

Interpreting 'not significant' P values using the counternull value.

Many students and scientists find it difficult to interpret a "not significant" P value. It is tempting to interpret "not statistically significant" as meaning that the data prove the treatment had no effect. This is invalid.

One approach is to calculate (Prism and InStat do it for you) a 95% confidence interval for the treatment effect, and to interpret all the values in that range in a scientific context. If all of those values are scientifically (clinically, practically…) irrelevant, then you have fairly solid evidence that the treatment does not cause an important change.
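As a rough sketch of this first approach, the function below computes a 95% confidence interval for a difference between two means using only Python's standard library and a normal approximation (adequate for reasonably large samples; Prism and InStat use the exact t-based interval). The function name and the sample data are hypothetical illustrations.

```python
from statistics import NormalDist, mean, stdev

def diff_ci_95(group_a, group_b):
    """95% CI for the difference in means (normal approximation,
    suitable for reasonably large samples)."""
    na, nb = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)
    # standard error of the difference (unpooled, Welch-style)
    se = (stdev(group_a) ** 2 / na + stdev(group_b) ** 2 / nb) ** 0.5
    z = NormalDist().inv_cdf(0.975)  # ~1.96
    return diff - z * se, diff + z * se

# Hypothetical data: if the entire interval lies within the range of
# scientifically trivial effects, the negative result is informative.
low, high = diff_ci_95([5.1, 4.9, 5.3, 5.0, 5.2],
                       [5.0, 5.2, 4.8, 5.1, 4.9])
```

The point of the exercise is not the interval itself but the judgment call that follows: every value inside (low, high) must be scientifically irrelevant before you conclude the treatment has no important effect.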

Another approach, which StatMate helps you with, is to compute the power of your study (given its sample size) to detect various hypothetical differences. If the study had high power to detect the smallest effect that you would consider important, then your negative results are fairly strong evidence that the treatment does not cause an important effect. If the study had low power to detect an effect that you would consider important, then the "not significant" results are ambiguous.
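A minimal sketch of this power calculation, under assumptions not stated in the article: two equal-sized groups, a known common standard deviation, and a normal approximation to the two-sample test (StatMate's exact method will differ slightly). The function name is hypothetical.

```python
from statistics import NormalDist

def approx_power(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison of means to
    detect a true difference `delta`, using a normal approximation."""
    nd = NormalDist()
    se = sd * (2 / n_per_group) ** 0.5   # SE of the difference in means
    z_crit = nd.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    # probability the observed difference clears the critical threshold
    # (in either direction)
    return nd.cdf(delta / se - z_crit) + nd.cdf(-delta / se - z_crit)
```

Evaluating this at the smallest difference you would consider important tells you how to read a negative result: high power makes the "not significant" finding informative, low power leaves it ambiguous.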

Rosenthal has proposed another approach to interpreting "not significant" results:

Before you conclude that there really is no difference, consider the fact that your data are just as consistent with a real effect equal to twice your observed effect as they are with the null hypothesis of no effect. An effect twice the size of the one you observed is the counternull value (1).

Recall the definition of a P value. First you state a null hypothesis. This is conventionally taken to be that there is no treatment effect, so the difference is really zero (or the relative risk is 1.0…). The P value then answers this question: If the true difference in the overall population really is the value specified in the null hypothesis, what is the probability that random sampling would lead to results as far (or farther) from the null hypothesis as those observed in this experiment?
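That definition can be made concrete with a small Monte Carlo sketch: simulate many experiments in which the null hypothesis is true, and count how often random sampling alone produces a difference at least as far from zero as the one observed. The function and its parameters are illustrative assumptions, not part of the article.

```python
import random
from statistics import mean

random.seed(0)  # reproducible simulation

def simulated_p(observed_diff, n_per_group, sd, trials=20000):
    """Fraction of null-hypothesis experiments (true difference zero)
    whose difference in means is at least as far from zero as the
    observed difference."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, sd) for _ in range(n_per_group)]
        b = [random.gauss(0, sd) for _ in range(n_per_group)]
        if abs(mean(a) - mean(b)) >= abs(observed_diff):
            hits += 1
    return hits / trials
```

This is only a conceptual illustration of the definition; actual P values are computed analytically from the t distribution, as described next.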

If you compare two means, the P value is computed from the t ratio, which is the absolute value of the difference between the observed treatment effect (the difference between the two means) and the null hypothesis value, divided by the standard error of the difference. You would get exactly the same t ratio if the null hypothesis were set to twice the observed difference. This is the counternull value. Your data are equally consistent with a true population difference equal to zero (the null hypothesis) and with a value equal to twice the observed difference (the counternull hypothesis).
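The symmetry described above can be verified in a few lines. With a hypothetical observed difference of 3.0 and a standard error of 1.5, the t ratio against the null value (0) equals the t ratio against the counternull value (twice the observed difference):

```python
def t_ratio(observed_diff, hypothesized_diff, se):
    """|observed effect - hypothesized effect| / SE of the difference."""
    return abs(observed_diff - hypothesized_diff) / se

observed = 3.0            # hypothetical observed difference between means
se = 1.5                  # hypothetical standard error of that difference
counternull = 2 * observed

t_null = t_ratio(observed, 0.0, se)               # null: true difference is 0
t_counternull = t_ratio(observed, counternull, se)
# t_null == t_counternull: the data are equally consistent with both
```

The equality is simply arithmetic: the observed effect sits exactly midway between the null value and the counternull value, so it is the same number of standard errors from each.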


Reference: (1) Rosenthal R and Rubin DB. Psychological Science, 5:329-334, 1994.



Keywords: counter null
