Some nonlinear regression programs report the chi-square of a fit. What does this mean? Why doesn't Prism report the chi-square value?
Chi-square calculations compare observed and expected values. Usually they are used in the context of categorical outcomes, to compare the observed and expected distributions of subjects among the categories.
The use of chi-square in nonlinear regression is quite different. Regression finds the curve that minimizes the scatter of points around the curve (more details below). If you know a lot about the scatter of the data, you can compare the amount of scatter you'd expect to see (based on the variation among replicates) with the amount you actually observed (based on the distances of the points from the curve) and reduce the result to a chi-square value. If this chi-square value is high, then the scatter around the curve is larger than you'd expect, which might lead you to conclude that you've fit the wrong model.
That is the big picture. Now let's fill in some details.
Nonlinear regression minimizes the sum of the squared vertical distances between the data points and the curve. In other words, nonlinear regression adjusts the parameters of the model to minimize the sum of (Ydata − Ycurve)². If you choose, you can apply weighting factors to adjust for systematic differences in the scatter of replicates as Y increases.
To find the best-fit values of the parameters, nonlinear regression minimizes the sum-of-squares. But how can you interpret the sum-of-squares? You can't really, as it depends on the number of data points you collected and the units you used to express Y. The value of the sum-of-squares can be used to compute R², which compares the sum-of-squares (a measure of the scatter of points around the curve) with the total variation in Y values (ignoring X and ignoring the model). What values of R² should you expect? How low a value is too low? There is no general answer, as it depends on your experimental system.
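The relationship between the sum-of-squares and R² can be sketched in a few lines. This is a minimal illustration with made-up observed and fitted values, not Prism's internal code:

```python
# Sketch: sum-of-squares and R-squared for a fitted curve.
# ydata and ycurve are made-up values for illustration only.

ydata  = [2.1, 3.9, 6.2, 7.8, 10.1]   # observed Y values
ycurve = [2.0, 4.0, 6.0, 8.0, 10.0]   # Y predicted by the fitted model

# Sum of squared residuals: scatter of the points around the curve
ss_res = sum((yd - yc) ** 2 for yd, yc in zip(ydata, ycurve))

# Total sum of squares: variation of Y around its mean,
# ignoring X and ignoring the model
ymean  = sum(ydata) / len(ydata)
ss_tot = sum((yd - ymean) ** 2 for yd in ydata)

# R-squared compares the two; a value near 1 means the scatter
# around the curve is small relative to the total variation in Y
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 4))
```

Note that ss_res depends on the units of Y and the number of points, which is exactly why it cannot be interpreted on its own, while the ratio in r_squared is unitless.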
Is the sum-of-squares too high? Too high compared to what? If you have collected replicate Y values at each value of X, you can compare the sum-of-squares with a value predicted from the scatter among replicates. Prism 5 does this calculation, which we call the replicates test.
Another approach to normalizing the sum-of-squares value is to compare the observed scatter of the points around the curve (the sum-of-squares) with the amount of experimental scatter you expect to see based on theory. This is done by computing chi-square using this equation:
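With SDi denoting the predicted standard deviation at the i-th value of X, and N data points, the equation is presumably:

```latex
\chi^2 = \sum_{i=1}^{N} \left( \frac{Y_{\mathrm{data},i} - Y_{\mathrm{curve},i}}{\mathrm{SD}_i} \right)^2
```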
For each data point, divide its distance from the curve by the predicted standard deviation at that value of X, square that ratio, and sum the squared ratios over all points to get chi-square.
If you know that the SD is the same for all values of X, this simplifies to:
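With a single SD shared by all values of X, the simplified equation is presumably:

```latex
\chi^2 = \frac{1}{\mathrm{SD}^2} \sum_{i=1}^{N} \left( Y_{\mathrm{data},i} - Y_{\mathrm{curve},i} \right)^2
```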
Those standard deviation values must be computed from lots of data. The second equation is probably more useful, as a single SD can be estimated by pooling many replicates. It is not a good idea to compute the SD values from the replicates you collected in this one experiment. Unless you have lots (certainly more than a dozen) of replicates at each X value, you simply won't know the SD values with sufficient accuracy to make the computation of chi-square helpful. (Use the replicates test instead.)
With regular weighted nonlinear regression (which Prism can do), you only have to know the relative weights. It is enough to know that the standard deviation of the scatter is proportional (for example) to the Y value. You don't have to actually know the SD values. This is enough to find the best-fit curve, but not enough to compute chi-square. To compute the chi-square, you have to know the predicted standard deviation at any value of X.
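The distinction can be made concrete: relative weighting (for example, weighting by 1/Y²) scales each residual by the predicted Y, so the fit depends only on relative scatter and no absolute SD is ever needed. A minimal sketch with made-up values:

```python
# Sketch: relative (1/Y^2) weighting needs no absolute SD values.
# ydata and ycurve are made-up values for illustration only.

ydata  = [10.5, 21.0, 39.0]   # observed Y values
ycurve = [10.0, 20.0, 40.0]   # Y predicted by the fitted model

# Each residual is divided by the predicted Y, so only the
# *relative* scatter matters. This weighted sum-of-squares is
# enough to find the best-fit curve, but it is not chi-square:
# computing chi-square would require the actual SD at each X.
weighted_ss = sum(((yd - yc) / yc) ** 2 for yd, yc in zip(ydata, ycurve))
print(weighted_ss)
```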
If you assume that replicates are scattered according to a Gaussian distribution with the SD you entered, and that you fit the data to the correct model, then the value of chi-square computed from that equation will follow a known chi-square distribution. This distribution depends on the number of degrees of freedom, which equals the number of data points minus the number of parameters. Knowing the value of chi-square and the number of degrees of freedom, you can compute a P value using the GraphPad QuickCalcs web calculator, an Excel formula, or a statistical table.
The P value answers this question: If all the assumptions are true, what is the chance of obtaining a chi-square value this large or larger? Thus if the P value is small, either a rare coincidence has occurred, or one of the following is true:
- You picked the wrong model. The scatter of data around the curve is more than you'd expect to see, so the model must not follow the data very well.
- The values of standard deviation you entered are wrong (too small).
- The scatter doesn't really follow a Gaussian distribution.
If you are quite sure the scatter really is Gaussian, and that you entered the correct values for SD, then chi-square is helpful. A large chi-square, and thus a small P value, tells you that your model is not right -- that the curve really doesn't follow the data very well. You should seek a better model.
But often a high chi-square value and low P value just tell you that you don't know the SD as well as you think you do. Because it is hard to determine the SD values precisely, it is hard to interpret the chi-square value. For this reason, we don't attempt the chi-square computation within Prism. We fear it would be more misleading than helpful.
In summary, the chi-square compares the actual discrepancies between the data and the curve with the expected discrepancies (assuming you selected the right model) based on the known SD among replicates. If the discrepancy is high, then you have some evidence that you've picked the wrong model. The advantage of the chi-square calculation is that it tests the appropriateness of a single model (without having to propose an alternative). The disadvantage is that the calculation depends on knowing the SD values with sufficient precision, which is often not the case.
We prefer to compare the fits of two models, rather than using chi-square to test whether the fit of one model is adequate.
Keywords: chi square