
What if you wish to test for differences in best-fit parameters among three or more data sets, based on a single experiment? You don't just want to know whether all the curves are the same; you want to use multiple comparisons to compare pairs of curves, focusing on a particular parameter. Prism lets you do this in two ways:

ANOVA approach -- statistical significance

1. Perform the nonlinear regression analysis. From the Results sheet, record the best-fit values of the parameter you are comparing, perhaps the logEC50 of a dose-response curve.

2. Also record the standard errors of those parameters and the degrees of freedom for each curve (which equal the number of data points minus the number of parameters fit).

3. Create a new Grouped table in Prism, formatted for entry of "Mean, Standard Error, N". You will enter values only into the first row of this table.

4. For each data set, enter the best-fit value of the parameter (e.g., the logEC50) in the "Mean" column.

5. Enter the standard error of the best-fit value in the "SEM" column.

6. For N, enter one more than the degrees of freedom for that fit. (Why enter df+1 into the "N" column? The ANOVA calculations don't actually care about the value of N. Instead, they are based on df. Prism subtracts 1 from the value you enter as N and uses that as df. Since you enter df+1, Prism ends up using the correct df value.)

7. Click Analyze and choose one-way ANOVA, along with an appropriate multiple comparisons test.
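The ANOVA that Prism runs from this Grouped table can also be reproduced from the summary statistics directly. A minimal sketch (the logEC50 values, SEMs, and df below are invented for illustration, not from a real fit):

```python
# One-way ANOVA computed from per-group summary statistics (mean, SEM, N),
# mirroring the Grouped-table entry described in the steps above.
from scipy.stats import f as f_dist

def anova_from_summary(means, sems, ns):
    """Return (F, p) for a one-way ANOVA given each group's mean, SEM, and N."""
    k = len(means)
    total_n = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / total_n
    # Between-groups sum of squares
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    df_between = k - 1
    # Within-groups sum of squares; each group's variance is SEM^2 * N
    ss_within = sum((n - 1) * (se ** 2 * n) for n, se in zip(ns, sems))
    df_within = total_n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    p = f_dist.sf(F, df_between, df_within)
    return F, p

# Hypothetical best-fit logEC50s, their SEs, and N = df + 1 (df = 10 per fit)
F, p = anova_from_summary([-6.0, -5.5, -5.4], [0.10, 0.12, 0.11], [11, 11, 11])
```

Because the per-group variance is reconstructed from the SEM, the result depends only on the best-fit values, their standard errors, and the df, exactly as the df+1 trick in step 6 implies.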

Compare two curves at a time with nonlinear regression

You can rerun the analysis comparing two data sets (curves) at a time. The easiest way to do this is to duplicate the results of the main analysis (New... Duplicate sheet) and then remove all but two data sets from that new analysis. Another approach is to keep one data table, click Analyze, choose nonlinear regression, and on the right side of that dialog choose which two data sets to compare.

There are two approaches to comparing fits: the extra sum-of-squares F test and the AICc approach.
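The extra sum-of-squares F test asks whether the improvement in fit from the more complex model (e.g., separate logEC50s) justifies the extra parameter, relative to the simpler null model (shared logEC50). A minimal sketch, with invented sums of squares and df rather than values from a real fit:

```python
# Extra sum-of-squares F test for nested models: the alternative model
# must have a lower sum of squares and fewer degrees of freedom.
from scipy.stats import f as f_dist

def extra_ss_f_test(ss_null, df_null, ss_alt, df_alt):
    """Return (F, p) comparing a simpler null model to a nested alternative."""
    F = ((ss_null - ss_alt) / (df_null - df_alt)) / (ss_alt / df_alt)
    p = f_dist.sf(F, df_null - df_alt, df_alt)
    return F, p

# Hypothetical values: shared-logEC50 fit vs. separate-logEC50 fit
F, p = extra_ss_f_test(ss_null=14.2, df_null=21, ss_alt=9.8, df_alt=20)
```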

With the statistical significance (extra sum-of-squares F test) approach, there is a traditional (albeit totally arbitrary) cutoff at P=0.05. But if you are doing many comparisons, you should correct for the multiple comparisons. Divide 0.05 (or whatever overall value you want) by the number of pairs of curves you are comparing to come up with a new, stricter cutoff for declaring a P value small enough to call the comparison "significant".
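This Bonferroni-style correction is simple arithmetic; for example, comparing every pair among k curves:

```python
# Divide the overall alpha by the number of pairwise curve comparisons.
# With k curves there are k*(k-1)/2 possible pairs.
def corrected_cutoff(alpha, n_curves):
    n_pairs = n_curves * (n_curves - 1) // 2
    return alpha / n_pairs

# With 4 curves (6 pairwise comparisons), call a comparison "significant"
# only when its individual P value falls below this stricter cutoff:
cutoff = corrected_cutoff(0.05, 4)  # 0.05 / 6
```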

The AIC approach to comparing curves is not based on statistical hypothesis testing, and is not confused by multiple comparisons. There are two ways to use this approach:

In the Diagnostics tab of nonlinear regression in Prism 6, check the option to report the AICc of each curve. Then you can do manual calculations or comparisons with those AICc values.

Run the nonlinear regression with two data sets at a time, and use the AIC approach to ask how strong the evidence is that the parameter you care about (e.g., the logEC50) differs between the data sets. The AIC calculations give you the relative likelihood that the parameter is the same in both data sets vs. different. You need to decide when those likelihoods are far enough apart that you will believe the parameters are different.
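Given the two AICc values, the probability that each model is correct follows from the standard Akaike-weight formula. A sketch with invented AICc values (Prism reports these probabilities itself when you run the comparison):

```python
import math

# Given AICc values for the "same parameter" and "different parameter"
# models, compute the probability that the "different" model is correct
# and the evidence ratio between the two models.
def aicc_comparison(aicc_same, aicc_diff):
    delta = aicc_same - aicc_diff          # positive favors "different"
    prob_diff = math.exp(0.5 * delta) / (1.0 + math.exp(0.5 * delta))
    evidence_ratio = math.exp(0.5 * abs(delta))  # times more likely
    return prob_diff, evidence_ratio

# Hypothetical AICc values for the two models
prob_diff, ratio = aicc_comparison(aicc_same=-20.0, aicc_diff=-26.0)
```

A difference of 6 AICc units, as here, corresponds to the "different" model being about 20 times more likely; how large a ratio you demand before concluding the parameters differ is your call.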

© 1995-2019 GraphPad Software, LLC. All rights reserved.