
Information Criteria

Model comparison tests—such as the likelihood ratio, Lagrange multiplier, or Wald test—are only appropriate for comparing nested models. In contrast, information criteria are model selection tools that you can use to compare any models fit to the same data. That is, the models being compared do not need to be nested.

Information criteria are likelihood-based measures of model fit that include a penalty for complexity (specifically, the number of parameters). Different information criteria are distinguished by the form of the penalty, and different criteria can prefer different models.

Let $\log L(\hat{\theta})$ denote the value of the maximized loglikelihood objective function for a model with $k$ parameters fit to $N$ data points. Two commonly used information criteria are:

• Akaike information criterion (AIC). The AIC compares models from the perspective of information entropy, as measured by Kullback-Leibler divergence. The AIC for a given model is

$$-2\log L(\hat{\theta}) + 2k.$$

When comparing AIC values for multiple models, smaller values of the criterion are better.
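The formula above can be computed directly from a model's maximized loglikelihood and parameter count. As a minimal sketch (in Python for illustration; the loglikelihood values below are hypothetical, and in MATLAB the Econometrics Toolbox function `aicbic` computes this for you):

```python
def aic(log_likelihood, k):
    """Akaike information criterion: -2*logL + 2*k (smaller is better)."""
    return -2.0 * log_likelihood + 2.0 * k

# Hypothetical maximized loglikelihoods for two candidate (non-nested) models
aic_3param = aic(-110.2, 3)   # ~226.4
aic_6param = aic(-108.9, 6)   # ~229.8
# The 3-parameter model has the smaller AIC: its slightly lower fit is
# outweighed by the penalty on the three extra parameters.
```

Note that the comparison only makes sense when both models are fit to the same data.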

• Bayesian information criterion (BIC). The BIC, also known as the Schwarz information criterion, compares models from the perspective of decision theory, as measured by expected loss. The BIC for a given model is

$$-2\log L(\hat{\theta}) + k\log(N).$$

When comparing BIC values for multiple models, smaller values of the criterion are better.
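Unlike the AIC, the BIC's penalty grows with the sample size $N$, so it penalizes extra parameters more heavily whenever $\log(N) > 2$ (that is, for $N > e^2 \approx 7.4$). A minimal Python sketch, again with hypothetical loglikelihood values:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian (Schwarz) information criterion: -2*logL + k*log(N)."""
    return -2.0 * log_likelihood + k * math.log(n)

# With N = 100 observations, each parameter costs log(100) ~ 4.6 under BIC,
# versus a flat cost of 2 under AIC, so BIC favors more parsimonious models.
bic_3param = bic(-110.2, 3, 100)
bic_6param = bic(-108.9, 6, 100)
```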

Note

Some references scale information criteria values by the number of observations (N). Econometrics Toolbox™ does not do this scaling. As a result, the absolute values of the measures the toolbox returns might differ from other sources by a factor of N.