
For conditional variance models, the innovation process is $${\epsilon}_{t}={\sigma}_{t}{z}_{t},$$ where *z _{t}* follows a standardized Gaussian or Student’s *t* distribution (specified by the `Distribution` property). The innovation variance, $${\sigma}_{t}^{2},$$ can follow a GARCH, EGARCH, or GJR conditional variance process.

If the model includes a mean offset term, then

$${\epsilon}_{t}={y}_{t}-\mu .$$
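As a concrete illustration, the innovation process and mean offset can be simulated directly. The following is a minimal Python sketch (not MathWorks code) using a GARCH(1,1) conditional variance recursion with arbitrary, hypothetical parameter values:

```python
import numpy as np

# Hypothetical GARCH(1,1) parameters (illustrative values only):
# sigma_t^2 = omega + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2
omega, alpha, beta = 0.1, 0.05, 0.9
mu = 0.5            # mean offset
N = 1000

rng = np.random.default_rng(0)
z = rng.standard_normal(N)               # standardized Gaussian innovations z_t

sigma2 = np.empty(N)
eps = np.empty(N)
sigma2[0] = omega / (1 - alpha - beta)   # unconditional variance as starting value
eps[0] = np.sqrt(sigma2[0]) * z[0]
for t in range(1, N):
    sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * z[t]   # eps_t = sigma_t * z_t

y = mu + eps                             # observed series, so eps_t = y_t - mu
```

With a Student’s *t* distribution, only the draw of `z` changes; the recursion is identical.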

The `estimate` function for `garch`, `egarch`, and `gjr` models estimates parameters using maximum likelihood estimation. `estimate` returns fitted values for any parameters in the input model equal to `NaN`. `estimate` honors any equality constraints in the input model, and does not return estimates for parameters with equality constraints.

Given the history of a process, innovations are conditionally independent. Let *H _{t}* denote the history of a process available at time *t*. The likelihood function for the innovation series is

$$f({\epsilon}_{1},{\epsilon}_{2},\dots ,{\epsilon}_{N}|{H}_{N-1})={\displaystyle \prod _{t=1}^{N}f({\epsilon}_{t}|{H}_{t-1})},$$

where *f* is a standardized
Gaussian or *t* density function.

The exact form of the loglikelihood objective function depends on the parametric form of the innovation distribution.

If *z _{t}* has a standard Gaussian distribution, then the loglikelihood function is

$$LLF=-\frac{N}{2}\mathrm{log}(2\pi )-\frac{1}{2}{\displaystyle \sum _{t=1}^{N}\mathrm{log}{\sigma}_{t}^{2}}-\frac{1}{2}{\displaystyle \sum _{t=1}^{N}\frac{{\epsilon}_{t}^{2}}{{\sigma}_{t}^{2}}}.$$
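Given innovation and conditional variance series, this objective is straightforward to evaluate. A minimal Python sketch (illustrative only, not the `estimate` implementation), with arbitrary example values:

```python
import numpy as np

def gaussian_llf(eps, sigma2):
    """Gaussian loglikelihood of innovations eps with conditional variances sigma2."""
    N = len(eps)
    return (-N / 2 * np.log(2 * np.pi)
            - 0.5 * np.sum(np.log(sigma2))
            - 0.5 * np.sum(eps**2 / sigma2))

# Arbitrary example values
eps = np.array([0.1, -0.3, 0.2])
sigma2 = np.array([0.5, 0.6, 0.4])
llf = gaussian_llf(eps, sigma2)
```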

If *z _{t}* has a standardized Student’s *t* distribution with $$\nu >2$$ degrees of freedom, then the loglikelihood function is

$$LLF=N\mathrm{log}\left[\frac{\Gamma \left(\frac{\nu +1}{2}\right)}{\sqrt{\pi (\nu -2)}\Gamma \left(\frac{\nu}{2}\right)}\right]-\frac{1}{2}{\displaystyle \sum _{t=1}^{N}\mathrm{log}{\sigma}_{t}^{2}}-\frac{\nu +1}{2}{\displaystyle \sum _{t=1}^{N}\mathrm{log}\left[1+\frac{{\epsilon}_{t}^{2}}{{\sigma}_{t}^{2}(\nu -2)}\right]}.$$
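The Student’s *t* objective can be sketched the same way; as $$\nu \to \infty$$ it approaches the Gaussian loglikelihood. A minimal Python sketch (illustrative only, not the `estimate` implementation):

```python
import numpy as np
from math import lgamma, log, pi

def t_llf(eps, sigma2, nu):
    """Loglikelihood for standardized Student's t innovations; requires nu > 2."""
    N = len(eps)
    # Constant term: log of Gamma((nu+1)/2) / (sqrt(pi*(nu-2)) * Gamma(nu/2))
    const = lgamma((nu + 1) / 2) - lgamma(nu / 2) - 0.5 * log(pi * (nu - 2))
    return (N * const
            - 0.5 * np.sum(np.log(sigma2))
            - (nu + 1) / 2 * np.sum(np.log(1 + eps**2 / (sigma2 * (nu - 2)))))

# Arbitrary example values with hypothetical degrees of freedom nu = 8
eps = np.array([0.1, -0.3, 0.2])
sigma2 = np.array([0.5, 0.6, 0.4])
llf_t = t_llf(eps, sigma2, nu=8.0)
```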

`estimate` performs covariance matrix estimation for maximum likelihood estimates using the outer product of gradients (OPG) method.
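The OPG estimator builds the covariance from per-observation scores: if $$g_t$$ is the gradient of the time-*t* loglikelihood contribution at the estimate, the covariance estimate is $$\left(\sum_t g_t g_t^{\prime}\right)^{-1}.$$ The following Python sketch illustrates the idea on a hypothetical Gaussian mean/variance model with central finite-difference scores; it is not the `estimate` implementation:

```python
import numpy as np

def loglik_t(theta, y):
    """Per-observation Gaussian loglikelihood contributions; theta = (mu, log_sigma2)."""
    mu, log_s2 = theta
    s2 = np.exp(log_s2)
    return -0.5 * np.log(2 * np.pi * s2) - 0.5 * (y - mu)**2 / s2

def opg_covariance(theta_hat, y, h=1e-5):
    """OPG covariance at theta_hat, scores by central finite differences."""
    k = len(theta_hat)
    G = np.zeros((len(y), k))        # row t = score of observation t w.r.t. parameters
    for j in range(k):
        ej = np.zeros(k)
        ej[j] = h
        G[:, j] = (loglik_t(theta_hat + ej, y) - loglik_t(theta_hat - ej, y)) / (2 * h)
    return np.linalg.inv(G.T @ G)    # (sum_t g_t g_t')^{-1}

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.5, size=500)
theta_hat = np.array([y.mean(), np.log(y.var())])   # ML estimates for this toy model
cov = opg_covariance(theta_hat, y)
```

For this toy model, the (1,1) entry of `cov` is close to the familiar $$\sigma^{2}/N$$ variance of the sample mean.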


- Likelihood Ratio Test for Conditional Variance Models
- Compare Conditional Variance Models Using Information Criteria