Many of the same ideas for testing best-fit values carry over from multiple linear regression. Prism will optionally report standard errors, confidence intervals, and P values for each coefficient estimate. These values can be used to assess how stable the coefficient estimates are. Large standard errors, and the correspondingly wide confidence intervals, indicate considerable uncertainty in the point estimates. The P values test whether the true value of each coefficient is equal to zero.
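For readers who want to see where these quantities come from, here is a minimal sketch (not Prism's own computation) using Python's statsmodels; the simulated data and predictor values are assumptions made purely for illustration.

```python
# Sketch: standard errors, confidence intervals, and Wald P values
# for logistic regression coefficients (hypothetical simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # two made-up predictors
logit_p = 0.5 + 1.0 * X[:, 0] - 0.8 * X[:, 1]    # assumed true model
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # binary (0/1) outcome

X_design = sm.add_constant(X)                    # add the intercept column
result = sm.Logit(y, X_design).fit(disp=False)

print(result.bse)              # standard error of each coefficient estimate
print(result.conf_int(0.05))   # 95% confidence intervals
print(result.pvalues)          # P values testing each coefficient = 0
```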
Prism offers two ways to evaluate linear dependence among the predictors in multiple logistic regression: variance inflation factors (VIFs) for assessing multicollinearity, or the correlation matrix for assessing pairwise correlations. See here for details.
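The sketch below shows what both checks look like in statsmodels and pandas (again, not Prism's implementation); the column names and the near-collinear "weight" predictor are hypothetical.

```python
# Sketch: variance inflation factors and the pairwise correlation matrix
# of the predictors (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "age": rng.normal(50, 10, 300),
    "bmi": rng.normal(27, 4, 300),
})
X["weight"] = 2.5 * X["bmi"] + rng.normal(0, 1, 300)   # nearly collinear with bmi

design = sm.add_constant(X)
vifs = {col: variance_inflation_factor(design.values, i)
        for i, col in enumerate(design.columns) if col != "const"}
print(vifs)          # a VIF much greater than 10 flags strong linear dependence
print(X.corr())      # pairwise correlations among the predictors
```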
Selecting these options reports the raw values of the corrected Akaike Information Criterion (AICc), the log-likelihood, and the model deviance.
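As a rough sketch of how these raw numbers relate to one another (assuming ungrouped binary data and the standard small-sample correction for AIC; the simulated fit is hypothetical):

```python
# Sketch: log-likelihood, AICc, and deviance from a fitted logistic model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(150, 2)))
y = rng.binomial(1, 1 / (1 + np.exp(-X @ np.array([0.3, 1.0, -0.7]))))
result = sm.Logit(y, X).fit(disp=False)

ll = result.llf                                       # maximized log-likelihood
k = len(result.params)                                # estimated parameters (incl. intercept)
n = int(result.nobs)                                  # number of observations
aicc = result.aic + (2 * k * (k + 1)) / (n - k - 1)   # AIC with small-sample correction
deviance = -2 * ll                                    # deviance for ungrouped binary data
print(ll, aicc, deviance)
```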
Prism has easier ways to compare models in two special cases. First, if you wish to compare your model to a model with only an intercept, then the easiest way to do this in Prism is to run hypothesis tests in the Goodness-of-fit tab.
Second, if you wish to compare two different logistic regression models, you can use the Compare tab.
If neither of those options meets your needs and you just want the raw numbers, select the desired box in this section.
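For context on what such a comparison involves, here is a minimal sketch of a likelihood-ratio test between two nested logistic models (this is an illustration of the general technique, not the Prism Compare tab itself; the data and predictors are invented).

```python
# Sketch: likelihood-ratio test comparing a full model to a nested reduced model.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(3)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 1.2 * x1))))    # x2 is irrelevant here

full = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=False)
reduced = sm.Logit(y, sm.add_constant(x1)).fit(disp=False)   # nested model: drops x2

lr = 2 * (full.llf - reduced.llf)        # likelihood-ratio statistic
df = full.df_model - reduced.df_model    # difference in number of parameters
p = chi2.sf(lr, df)                      # P value for the additional terms
print(lr, p)
```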
Specify the confidence level Prism should use when reporting values in the results.
Labels for "Negative" and "Positive" outcomes: To make the results easier to interpret, text labels (such as "presence" and "absence", "yes" and "no", or "alive" and "dead") may be added here for the dependent (Y) variable. These text labels will be used in the results output of the model fit. If a categorical variable is used as the dependent variable, it is unlikely that these will need to be changed.
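As a small illustration of the idea (an assumption about a typical workflow, not Prism's internals), text outcome labels simply correspond to the 0/1 coding the model actually uses; the "alive"/"dead" labels below are just an example.

```python
# Sketch: mapping text outcome labels to the 0/1 codes used by the model.
import pandas as pd

outcome = pd.Series(["alive", "dead", "alive", "alive", "dead"])
y = outcome.map({"alive": 0, "dead": 1})   # 0 = negative outcome, 1 = positive outcome
print(y.tolist())                          # [0, 1, 0, 0, 1]
```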