"All models are wrong, but some are useful"
- George E. P. Box
The quote above emphasizes the fact that statistical models rarely (if ever) capture the true complexity of a system or population. Instead, models are used to simplify the truth down to a form that is more easily interpreted and understood. The consequence is that there may be many different models that could be proposed as simplifications of the same system or population. Because of this, a lot of what we generally think of as “interpretation” of the results of a single model is actually a cleverly disguised comparison of different competing models. For example, one common comparison is between the model specified in the analysis and the so-called “empty” or “null” model. This null model is simply a model that contains no predictor variables, and - when compared to the specified model in the analysis - can be used to determine the relative importance of the predictor variables included in the specified model, or to assess the overall “fit” of the specified model. For example, the values in the model diagnostics section of the results (AIC, partial log-likelihood, negative two times the partial log-likelihood, and pseudo R-squared) are commonly used to compare the specified model to the null model.
In contrast, the controls on the Compare tab of the analysis parameters dialog are designed to compare two models specified by the user (two competing models). These controls work similarly to the corresponding controls for other types of multiple regression, including multiple linear regression and multiple logistic regression. The options available on this tab allow you to specify a second model (with different combinations of predictor variables, interaction terms, and/or transformations), and to compare how well each model fits the entered data. As described above, these controls should not be used to try to specify the “null model” for comparison, but should only be used to compare two models that each contain predictor variables. Read more about comparisons to the null model in the section “Comparison to the null model” below.
Prism offers two different methods for comparing the model specified on the Model tab with the model specified on the Compare tab. These methods are Akaike's Information Criterion (AIC) and the likelihood ratio test (LRT).
AIC is an information theory approach that is used to determine how well the data support each model, taking into account the partial log-likelihood of each model as well as the number of parameters contained within each model. The option to report the partial log-likelihood (and negative two times the partial log-likelihood) for the selected model is also available on the Options tab of the Cox proportional hazards regression dialog. The results are expressed as the probability that each model is correct, with the probabilities summing to 100%. Obviously, this method does not consider the possibility that a different model is correct. It simply compares the two models it is asked to compare. If one model is much more likely to be correct than the other (say 1% vs. 99%), you will want to choose the more likely model. However, if the difference is not very big (say 40% vs. 60%), you can't know for sure which model is better, and you'd want to collect more data. Importantly, AIC can be used to compare any two models fit to the same data set. The formula used to calculate AIC is relatively simple:
AIC = -2*(partial log-likelihood) + 2*k,
where k is the number of model parameters.
This comparison reports the difference in AIC values calculated for each of the selected models. More information on how AIC is calculated for Cox proportional hazards regression is given here. Note that because it is difficult to specify the number of observations in the presence of censoring, Prism - like other applications - reports only AIC, and not the corrected AIC (AICc).
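To make the arithmetic concrete, here is a minimal sketch (in Python, using made-up partial log-likelihoods and parameter counts that are not Prism output) showing how the AIC of each model can be computed with the formula above, and how the AIC difference translates into the “probability that each model is correct” using standard Akaike-weight arithmetic.

```python
import numpy as np

# Hypothetical partial log-likelihoods and parameter counts for two models
logL_A, k_A = -180.2, 3   # model A: 3 predictors (made-up values)
logL_B, k_B = -183.9, 1   # model B: 1 predictor  (made-up values)

# AIC = -2*(partial log-likelihood) + 2*k
aic_A = -2 * logL_A + 2 * k_A
aic_B = -2 * logL_B + 2 * k_B
delta = aic_B - aic_A      # difference in AIC (positive values favor model A)

# Convert the AIC difference into relative probabilities (Akaike weights)
# that sum to 100% across the two models being compared.
w_A = 1.0 / (1.0 + np.exp(-0.5 * delta))
w_B = 1.0 - w_A

print(f"AIC A = {aic_A:.1f}, AIC B = {aic_B:.1f}, difference = {delta:.1f}")
print(f"Probability model A is correct: {100*w_A:.1f}%")
print(f"Probability model B is correct: {100*w_B:.1f}%")
```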
Similar to AIC, the likelihood ratio test (LRT) also uses the partial log-likelihood to determine which model is preferred. However, unlike AIC, LRTs are only appropriate when one model is a reduced version of the other. Another way to describe this scenario is to say that the two models are “nested”. Although this test is only valid when the models are nested, Prism will not check whether the models are nested. Therefore, you must be careful when choosing to compare two models using this test.
The test statistic is calculated as a scaled difference between the partial log-likelihood of the simpler model (the model with fewer parameters) and that of the more complex model (the model with more parameters):
LRT statistic = -2*[partial log-likelihood(simpler model) - partial log-likelihood(more complex model)]
Adding parameters to a model (very nearly) always increases the partial log-likelihood of the model. Thus, this test statistic measures how much more "likely" the data are under the more complex model than under the simpler model. The value of this statistic is used to calculate a P value. A small P value suggests rejecting the null hypothesis that the simpler model is correct. In Prism, you can specify how small the P value must be to reject this null hypothesis (the default is 0.05).
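As an illustration, the sketch below computes the LRT statistic from two hypothetical partial log-likelihoods and converts it to a P value using the usual chi-square approximation, with degrees of freedom equal to the difference in the number of parameters. The numbers are made up for the example and do not come from Prism.

```python
from scipy.stats import chi2

# Hypothetical partial log-likelihoods (made-up values, not Prism output)
logL_simple, k_simple = -183.9, 1    # nested (simpler) model
logL_complex, k_complex = -180.2, 3  # more complex model

# LRT statistic = -2*[partial log-likelihood(simpler) - partial log-likelihood(complex)]
lrt = -2 * (logL_simple - logL_complex)

# Under the null hypothesis that the simpler model is correct, the statistic
# approximately follows a chi-square distribution with degrees of freedom
# equal to the difference in the number of parameters.
df = k_complex - k_simple
p_value = chi2.sf(lrt, df)

print(f"LRT statistic = {lrt:.2f}, df = {df}, P = {p_value:.4f}")
```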
Finally, on the Compare tab of the Cox proportional hazards regression dialog, the main effects, interactions, and transforms for the second model must be defined. In many cases, the second model will be nested within the first model (i.e. it will use a subset of the effects, interactions, and transforms of the first). If this is the case, the second model will be the “simpler” model. Note that Cox proportional hazards regression models do not include an intercept term, so this will not be an option.
A comparison that is often very useful when performing Cox proportional hazards regression is the comparison of the specified model with the “null model” (a model that contains no predictor variables at all). However, you do not need to use the options on the Compare tab to set up this comparison. Indeed, Prism will automatically present the results of the comparison with the null model as part of the standard output for any model specified on the Model tab. By default, the AIC values for the specified model and the null model are reported in the model diagnostics section of the Tabular results, and the options to report the partial log-likelihood or negative two times the partial log-likelihood for the specified and null models (used to calculate the likelihood ratio test statistic) can be found on the Options tab of the analysis dialog.
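For readers who want to reproduce this kind of null-model comparison outside Prism, here is a minimal sketch that assumes a Python environment with the open-source lifelines and scipy packages (an assumption of this example, not something Prism requires). It fits a Cox model to an example data set bundled with lifelines, computes AIC from the partial log-likelihood using the formula above, and runs the likelihood ratio test against the null model.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # example recidivism survival data bundled with lifelines
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

# Partial log-likelihood of the fitted (specified) model
logL = cph.log_likelihood_

# AIC = -2*(partial log-likelihood) + 2*k, with k = number of fitted coefficients
k = len(cph.params_)
aic = -2 * logL + 2 * k

# Likelihood ratio test of the specified model against the null (no-predictor) model
lrt_result = cph.log_likelihood_ratio_test()

print(f"partial log-likelihood = {logL:.2f}, AIC = {aic:.2f}")
print(f"LRT vs. null model: statistic = {lrt_result.test_statistic:.2f}, "
      f"P = {lrt_result.p_value:.4f}")
```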