
Linear prediction filter coefficients

`[a,g] = lpc(x,p)`

`lpc` determines the coefficients of a forward linear predictor by minimizing the prediction error in the least squares sense. It has applications in filter design and speech coding.

`[a,g] = lpc(x,p)` finds the coefficients of a `p`th-order linear predictor (FIR filter) that predicts the current value of the real-valued time series `x` based on past samples:

$$\widehat{x}(n)=-a(2)x(n-1)-a(3)x(n-2)-\cdots -a(p+1)x(n-p)$$
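As a sketch (not MATLAB code), the one-step prediction above can be written with NumPy, assuming the coefficient vector `a` = `[1 a(2) ... a(p+1)]` is already known:

```python
import numpy as np

def predict_next(x, a):
    """One-step forward prediction xhat(n) from the p most recent samples.

    x : 1-D array of past samples, with x[-1] being x(n-1)
    a : predictor coefficients [1, a(2), ..., a(p+1)]
    """
    p = len(a) - 1
    # xhat(n) = -a(2)x(n-1) - a(3)x(n-2) - ... - a(p+1)x(n-p)
    return -np.dot(a[1:], x[:-p-1:-1])
```

Here `x[:-p-1:-1]` picks out `[x(n-1), ..., x(n-p)]`, matching the sign convention of the equation above.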

`p` is the order of the prediction filter polynomial, `a` = `[1 a(2) ... a(p+1)]`. If `p` is unspecified, `lpc` uses `p` = `length(x)-1` as the default. If `x` is a matrix containing a separate signal in each column, `lpc` returns a model estimate for each column in the rows of matrix `a` and a column vector of prediction error variances `g`. The order `p` must be less than or equal to the length of `x`.

`lpc` uses the autocorrelation method of autoregressive (AR) modeling to find the filter coefficients. The generated filter might not model the process exactly even if the data sequence is truly an AR process of the correct order. This is because the autocorrelation method implicitly windows the data; that is, it assumes that signal samples beyond the length of `x` are 0.

`lpc` computes the least squares solution to

$$Xa=b$$

where

$$\begin{array}{ccc}X=\left[\begin{array}{cccc}x(1)& 0& \cdots & 0\\ x(2)& x(1)& \ddots & \vdots \\ \vdots & x(2)& \ddots & 0\\ x(m)& \vdots & \ddots & x(1)\\ 0& x(m)& \ddots & x(2)\\ \vdots & \ddots & \ddots & \vdots \\ 0& \cdots & 0& x(m)\end{array}\right],& a=\left[\begin{array}{c}1\\ a(2)\\ \vdots \\ a(p+1)\end{array}\right],& b=\left[\begin{array}{c}1\\ 0\\ \vdots \\ 0\end{array}\right]\end{array}$$

and `m` is the length of `x`.
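Under these definitions, a minimal NumPy sketch (an illustration of the method, not MATLAB's implementation) builds the zero-padded matrix *X*, solves the system by least squares, and normalizes so that *a*(1) = 1:

```python
import numpy as np

def lpc_lstsq(x, p):
    """Autocorrelation-method LPC via least squares on X a = b.

    Builds the (m+p)-by-(p+1) matrix X whose column k is x delayed
    by k samples and zero-padded, then normalizes the solution so
    that a[0] = 1, giving a = [1, a(2), ..., a(p+1)].
    """
    m = len(x)
    X = np.zeros((m + p, p + 1))
    for k in range(p + 1):
        X[k:k + m, k] = x          # zero-padding implements the implicit window
    b = np.zeros(m + p)
    b[0] = 1.0
    a, *_ = np.linalg.lstsq(X, b, rcond=None)
    return a / a[0]                # scale so the leading coefficient is 1
```

The zero rows above and below the data in each column are exactly the implicit windowing noted earlier: samples beyond the length of `x` are assumed to be 0.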
Solving the least squares problem via the normal equations

$${X}^{H}Xa={X}^{H}b$$

leads to the Yule-Walker equations

$$\left[\begin{array}{cccc}r(1)& r{(2)}^{\ast}& \cdots & r{(p)}^{\ast}\\ r(2)& r(1)& \ddots & \vdots \\ \vdots & \ddots & \ddots & r{(2)}^{\ast}\\ r(p)& \cdots & r(2)& r(1)\end{array}\right]\left[\begin{array}{c}a(2)\\ a(3)\\ \vdots \\ a(p+1)\end{array}\right]=\left[\begin{array}{c}-r(2)\\ -r(3)\\ \vdots \\ -r(p+1)\end{array}\right]$$

where *r* = [*r*(1) *r*(2) ... *r*(*p*+1)] is an autocorrelation estimate for `x` computed using `xcorr`. The Yule-Walker equations are solved in *O*(*p*^{2}) flops by the Levinson-Durbin algorithm (see `levinson`).
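The recursion can be sketched as follows, for real-valued data with the autocorrelation lags already computed; this is an illustration of the Levinson-Durbin algorithm, not the `levinson` function itself:

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve the order-p Yule-Walker equations in O(p^2) flops.

    r : real autocorrelation lags, r[0] through r[p]
        (corresponding to r(1) through r(p+1) in the text)
    Returns a = [1, a(2), ..., a(p+1)] and the prediction
    error variance g.
    """
    a = np.zeros(p + 1)
    a[0] = 1.0
    g = r[0]                                   # order-0 error variance
    for i in range(1, p + 1):
        # reflection coefficient from the correlation of the current residual
        k = -(r[i] + np.dot(a[1:i], r[i-1:0:-1])) / g
        a[1:i+1] = a[1:i+1] + k * a[i-1::-1]   # order-update of the predictor
        g *= 1.0 - k * k                       # error variance shrinks each order
    return a, g
```

Each iteration raises the predictor order by one using only the previous coefficients and one inner product, which is what keeps the total cost at *O*(*p*^{2}) rather than the *O*(*p*^{3}) of a general linear solve.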

[1] Jackson, L. B. *Digital Filters
and Signal Processing*. 2nd Edition. Boston: Kluwer Academic
Publishers, 1989, pp. 255–257.