Comparing models in multiple logistic regression works much like comparing models in multiple linear regression.
On the Compare tab of the multiple logistic regression dialog, first specify the main effects, interactions, and transforms for the second model. In many cases, the second model will be nested within the first (i.e., it will use a subset of the first model's effects, interactions, and transforms). When this is the case, the second model is the “simpler” model.
For logistic regression, Prism offers two approaches for comparing models: the corrected Akaike's Information Criterion (AICc) and the Likelihood Ratio Test (LRT).
AICc is an information-theory approach that uses a corrected version of Akaike's Information Criterion. This method determines how well the data support each model, taking into account the deviance of each model. (The option to report the deviance of the selected model is also available on the Goodness-of-fit tab of the multiple logistic regression dialog.) The results are expressed as the probability that each model is correct, with the two probabilities summing to 100%. Obviously, this method does not consider the possibility that a different model is correct; it only compares the two models it is asked to compare. If one model is much more likely to be correct than the other (say, 1% vs. 99%), you will want to choose it. However, if the difference is not very big (say, 40% vs. 60%), you can't be sure which model is better, and you'd want to collect more data. Read more about how these calculations work. Importantly, AICc can be used to compare any two models fit to the same data set.
This approach reports the difference between the AICc values calculated for the two selected models. More information on how AICc is calculated for logistic regression is given here.
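To make the arithmetic concrete, here is a minimal sketch of an AICc comparison. All of the numbers (deviances, parameter counts, sample size) are hypothetical, made up purely for illustration; they do not come from Prism. The sketch converts each model's deviance to AICc, takes the difference, and turns that difference into the paired probabilities described above (the standard Akaike-weight calculation for two models):

```python
from math import exp

def aicc(deviance, k, n):
    """Corrected Akaike's Information Criterion computed from the deviance.

    deviance = -2 * log-likelihood, k = number of parameters
    (including the intercept), n = number of observations.
    """
    aic = deviance + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical values: a simpler and a more complex model,
# both fit to the same 120 observations.
n = 120
aicc_simple = aicc(deviance=158.2, k=3, n=n)
aicc_complex = aicc(deviance=151.7, k=5, n=n)

delta = aicc_complex - aicc_simple  # difference in AICc values

# Convert the AICc difference into the probability that each model
# is correct; the two probabilities sum to 100%.
w_simple = 1.0 / (1.0 + exp(-0.5 * delta))
w_complex = 1.0 - w_simple
print(f"AICc difference: {delta:.2f}")
print(f"P(simpler model correct):      {100 * w_simple:.1f}%")
print(f"P(more complex model correct): {100 * w_complex:.1f}%")
```

With these made-up deviances the more complex model comes out at roughly 75% vs. 25%, a difference that (as noted above) is suggestive but not decisive.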
Similar to AICc, the LRT uses the model deviance to determine which model is preferred. However, unlike AICc, LRTs are only appropriate when one model is a reduced version of the other; another way to describe this scenario is to say that the two models are “nested”. Although the test is only valid when the models are nested, Prism does not check whether they are. Therefore, you must be careful when choosing to compare two models with this test.
The test statistic is calculated as the difference between the deviance of the simpler model (the one with fewer parameters) and the deviance of the more complex model (the one with more parameters):
LRT statistic = Deviance(simpler model) - Deviance(more complex model)
Adding parameters to a model (very nearly) always decreases the model deviance, so the test statistic measures how much smaller the deviance is for the more complex model. Under the null hypothesis that the simpler model is correct, this statistic approximately follows a chi-squared distribution with degrees of freedom equal to the difference in the number of parameters between the two models, and that distribution is used to calculate a P value. A small P value suggests rejecting the null hypothesis that the simpler model is correct. In Prism, you can specify how small the P value must be to reject this null hypothesis (the default is 0.05).
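The following sketch shows the same calculation in code, again using hypothetical deviances and parameter counts chosen only for illustration. It computes the LRT statistic as defined above and obtains the P value from the chi-squared distribution:

```python
from scipy.stats import chi2

# Hypothetical deviances for two nested models fit to the same data.
deviance_simple = 158.2   # model with fewer parameters (k = 3)
deviance_complex = 151.7  # model with more parameters (k = 5)

# LRT statistic = Deviance(simpler model) - Deviance(more complex model)
lrt = deviance_simple - deviance_complex

# Under the null hypothesis that the simpler model is correct, the
# statistic approximately follows a chi-squared distribution with
# degrees of freedom equal to the difference in parameter counts.
df = 5 - 3
p_value = chi2.sf(lrt, df)

alpha = 0.05  # the default threshold in Prism
print(f"LRT statistic = {lrt:.2f}, df = {df}, P = {p_value:.4f}")
if p_value < alpha:
    print("Reject the simpler model; prefer the more complex model.")
else:
    print("No strong evidence against the simpler model.")
```

Here the statistic is 6.5 on 2 degrees of freedom, giving P ≈ 0.039, so at the default 0.05 threshold the simpler model would be rejected in favor of the more complex one.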