The choices available vary depending on which fitting method you chose in the Method tab.
•R² is the standard way to assess goodness of fit (see the sketch after this list for how these metrics are computed).
•The adjusted R² takes into account the number of parameters fit to the data, so it has a lower value than R² (unless you fit only one parameter, in which case R² and adjusted R² are identical). It is not commonly reported with nonlinear regression.
•The sum-of-squares (or weighted sum-of-squares) is the value that Prism minimizes when it fits a curve. Reporting this value is useful only if you want to compare Prism's results to those of another program, or want to do additional calculations by hand.
•Sy.x and RMSE are alternative ways to quantify the standard deviation of the residuals. We recommend the Sy.x, which is also called Se.
•The AICc is useful only if you separately fit the same data to three or more models; you can then use the AICc to choose among them. Note that comparing AICc values makes sense only when the fits differ in nothing but the model you chose: if the data or weighting are not identical between fits, the comparison is meaningless.
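All of these values can be computed directly from the residuals of a least-squares fit. Here is a minimal Python sketch (not Prism's code; the names y, yhat and K are placeholders, and conventions for K in the AICc formula vary, with some formulations counting the error variance as an extra parameter):

    import numpy as np

    def goodness_of_fit(y, yhat, K):
        # y: observed values; yhat: model predictions; K: number of fitted parameters
        y, yhat = np.asarray(y, float), np.asarray(yhat, float)
        n = y.size
        ss = np.sum((y - yhat) ** 2)                  # the (unweighted) sum-of-squares
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - ss / ss_tot
        adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - K)  # equals r2 when K == 1
        sy_x = np.sqrt(ss / (n - K))                  # SD of residuals with n - K d.f.
        rmse = np.sqrt(ss / n)
        aicc = n * np.log(ss / n) + 2 * K + 2.0 * K * (K + 1) / (n - K - 1)
        return {"SS": ss, "R2": r2, "adjR2": adj_r2,
                "Sy.x": sy_x, "RMSE": rmse, "AICc": aicc}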
If you chose robust regression, Prism can compute a Robust Standard Deviation of the Residuals (RSDR). The goal is to compute a robust standard deviation that is not influenced by outliers. In a Gaussian distribution, 68.27% of values lie within one standard deviation of the mean, so we find the 68.27th percentile of the absolute values of the residuals. Because this value slightly underestimates the SD, the RSDR is computed by multiplying it by n/(n-K), where n is the number of data points and K is the number of parameters fit.
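As an illustration of that computation (not Prism's code):

    import numpy as np

    def rsdr(residuals, K):
        # 68.27th percentile of the absolute residuals, corrected by n/(n - K)
        residuals = np.asarray(residuals, float)
        n = residuals.size
        return np.percentile(np.abs(residuals), 68.27) * n / (n - K)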
If you chose Poisson regression, Prism offers three ways to quantify the goodness of fit: the pseudo-R², the dispersion index, and the model deviance. The pseudo-R² can be interpreted pretty much like an ordinary R². The other two values will be of interest only to those who have studied Poisson regression in depth.
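If you want to see how such values arise, here is an illustrative computation with statsmodels fitting a simple (linear) Poisson model; the deviance-based pseudo-R² and Pearson-based dispersion index shown are common formulations, and Prism's definitions may differ in detail:

    import numpy as np
    import statsmodels.api as sm

    x = np.arange(1.0, 21.0)
    y = np.random.default_rng(0).poisson(0.5 * x)        # made-up count data

    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()

    deviance = fit.deviance                              # model deviance
    pseudo_r2 = 1.0 - fit.deviance / fit.null_deviance   # deviance-based pseudo-R2
    dispersion = fit.pearson_chi2 / fit.df_resid         # ~1 if Poisson scatter holds
    print(pseudo_r2, dispersion, deviance)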
Least-squares nonlinear regression assumes that the distribution of residuals follows a Gaussian distribution (robust nonlinear regression does not make this assumption). Prism can test this assumption by running a normality test on the residuals. Prism offers four normality tests; we recommend the D'Agostino-Pearson test (sketched below).
These tests make no sense and are not available if you chose robust or Poisson regression.
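Outside Prism, the D'Agostino-Pearson test is available in SciPy as scipy.stats.normaltest. For example:

    import numpy as np
    from scipy import stats

    residuals = np.random.default_rng(1).normal(size=50)  # stand-in for real residuals
    k2, p = stats.normaltest(residuals)    # D'Agostino-Pearson omnibus test
    print(p)                               # small P: residuals are not Gaussian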
Does the curve follow the trend of the data? Or does it systematically deviate from that trend? Prism offers two tests that answer these questions.
If you have entered replicate Y values, choose the replicates test to find out if the points are 'too far' from the curve (compared to the scatter among replicates). If the P value is small, conclude that the curve does not come close enough to the data.
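The replicates test is essentially the classic lack-of-fit F test. Here is a sketch of that idea (x, y, yhat and K are placeholder names; Prism's exact computation may differ in detail):

    import numpy as np
    from scipy import stats

    def replicates_test(x, y, yhat, K):
        # x: X of each point; y: observed Y; yhat: curve value at each point
        # K: number of fitted parameters; requires replicate Y values at shared X
        x, y, yhat = np.asarray(x, float), np.asarray(y, float), np.asarray(yhat, float)
        ss_pure = ss_lack = 0.0
        xs = np.unique(x)
        for xi in xs:
            sel = x == xi
            ss_pure += np.sum((y[sel] - y[sel].mean()) ** 2)           # scatter among replicates
            ss_lack += sel.sum() * (y[sel].mean() - yhat[sel][0]) ** 2  # replicate mean vs curve
        df_lack, df_pure = xs.size - K, y.size - xs.size
        F = (ss_lack / df_lack) / (ss_pure / df_pure)
        return stats.f.sf(F, df_lack, df_pure)   # small P: curve is 'too far' from the data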
The runs test is available if you entered single Y values (no replicates) or chose to fit only the means rather than individual replicates (Method tab). A 'run' is a series of consecutive points on the same side of the curve. If there are too few runs, the curve is not following the trend of the data. If you fit several curves at once, sharing one or more parameters (but not all of them) with global regression, Prism reports the runs test for each curve fit, but not for the global fit. Prior versions of Prism reported an overall runs test for the global fit by summing the runs of each component curve, but this is not standard and Prism no longer does it.
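Here is a sketch of the runs-test idea, using the standard Wald-Wolfowitz normal approximation (not necessarily the exact computation Prism uses):

    import numpy as np
    from scipy import stats

    def runs_test(residuals):
        s = np.sign(residuals)
        s = s[s != 0]                              # drop points exactly on the curve
        runs = 1 + np.count_nonzero(s[1:] != s[:-1])
        n_pos, n_neg = np.sum(s > 0), np.sum(s < 0)
        n = n_pos + n_neg
        mean = 2.0 * n_pos * n_neg / n + 1         # expected runs if signs were random
        var = (mean - 1) * (mean - 2) / (n - 1)
        z = (runs - mean) / np.sqrt(var)
        return stats.norm.cdf(z)                   # small P: too few runs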
Nonlinear regression assumes that, on average, the distance of the points from the curve is the same all the way along the curve, or that you have accounted for systematic differences by choosing an appropriate weighting. Prism can test this assumption with a test for appropriate weighting. If you have chosen equal weighting (the default) this is the same as a test for homoscedasticity.
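One simple way to probe the same assumption, shown purely as an illustration (this is not Prism's test), is to rank-correlate the absolute residuals with the predicted Y values:

    import numpy as np
    from scipy import stats

    def weighting_check(yhat, residuals):
        # Spearman rank correlation of |residuals| vs predicted Y; a strong
        # positive correlation suggests the scatter grows with Y, so the
        # chosen weighting (or lack of it) may be inappropriate
        rho, p = stats.spearmanr(np.abs(residuals), yhat)
        return rho, p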
If you choose a residual plot, Prism creates a new graph. Prism offers a choice of four different residual graphs. Viewing a residual plot can help you assess whether the distribution of residuals is random above and below the curve.
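The simplest kind of residual plot (residuals versus X) takes only a few lines with matplotlib:

    import matplotlib.pyplot as plt

    def residual_plot(x, residuals):
        # residuals vs X, with a zero reference line; random scatter about
        # the line is what a good fit should show
        plt.scatter(x, residuals)
        plt.axhline(0.0, linestyle="--")
        plt.xlabel("X")
        plt.ylabel("Residual")
        plt.show()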
What does it mean for parameters to be intertwined? After fitting a model, change the value of one parameter but leave the others alone. The curve moves away from the points. Now, try to bring the curve back so it is close to the points by changing the other parameter(s). If you can bring the curve closer to the points, the parameters are intertwined. If you can bring the curve back to its original position, then the parameters are redundant. In this case, Prism will alert you by labeling the fit 'ambiguous'.
We suggest that you report the dependency, and not bother with the covariance matrix. When you are getting started with curve fitting, it is OK to leave both options unchecked.
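For the curious: one common formulation derives each parameter's dependency from the parameter covariance matrix C as 1 - 1/(C[i,i] * inv(C)[i,i]), which runs from 0 (independent) toward 1 (redundant). A sketch, with the caveat that Prism's exact computation may differ:

    import numpy as np

    def dependencies(cov):
        # cov: parameter covariance matrix from the fit (hypothetical input);
        # dependency_i = 1 - 1/(cov[i,i] * inv(cov)[i,i])
        cov = np.asarray(cov, float)
        inv = np.linalg.inv(cov)
        return 1.0 - 1.0 / (np.diag(cov) * np.diag(inv))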
Even though nonlinear regression, as its name implies, is designed to fit nonlinear models, some of the inferences it reports assume that the model is close enough to linear that the distribution of each parameter is symmetrical. This means that if you analyzed many data sets sampled from the same system, the distribution of the best-fit values of each parameter would be symmetrical and Gaussian.
If the distribution of a parameter is highly skewed, the reported SE and CI for that parameter will not be very useful ways of assessing precision. Hougaard's measure of skewness quantifies how skewed each parameter is.
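Hougaard's measure itself is computed from the model's derivatives, but the idea it quantifies can be seen by simulation (an illustration, not Prism's computation): fit many simulated data sets and examine the skewness of each parameter's best-fit values. The one-phase decay and its 'true' values below are made up for the example:

    import numpy as np
    from scipy import stats
    from scipy.optimize import curve_fit

    def decay(x, span, k):
        return span * np.exp(-k * x)

    rng = np.random.default_rng(2)
    x = np.linspace(0, 10, 20)
    true_y = decay(x, 10.0, 0.4)            # made-up 'true' parameter values

    fits = []
    for _ in range(500):
        y = true_y + rng.normal(scale=1.0, size=x.size)
        popt, _ = curve_fit(decay, x, y, p0=[10.0, 0.4])
        fits.append(popt)

    # skewness near 0 means the parameter's distribution is symmetrical;
    # a large value means the reported SE and CI will be misleading
    print(stats.skew(np.array(fits), axis=0))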