
# eigs

Subset of eigenvalues and eigenvectors

## Syntax

• `d = eigs(A)`
• `d = eigs(A,k)`
• `d = eigs(A,k,sigma)`
• `d = eigs(A,k,sigma,opts)`
• `d = eigs(A,B,___)`
• `d = eigs(Afun,n,___)`
• `[V,D] = eigs(___)`
• `[V,D,flag] = eigs(___)`

## Description

`d = eigs(A)` returns a vector of the six largest magnitude eigenvalues of matrix `A`.

`d = eigs(A,k)` returns the `k` largest magnitude eigenvalues.

`d = eigs(A,k,sigma)` returns `k` eigenvalues based on the value of `sigma`. For example, `eigs(A,k,'sm')` returns the `k` smallest magnitude eigenvalues.

`d = eigs(A,k,sigma,opts)` additionally specifies options using a structure.

`d = eigs(A,B,___)` solves the generalized eigenvalue problem `A*V = B*V*D`. You can optionally specify `k`, `sigma`, or `opts` as additional input arguments.

`d = eigs(Afun,n,___)` specifies a function handle, `Afun`, instead of a matrix, `A`. You can optionally specify `B`, `k`, `sigma`, or `opts` as additional input arguments.

`[V,D] = eigs(___)` returns diagonal matrix `D`, containing the eigenvalues on the main diagonal, and matrix `V`, whose columns are the corresponding eigenvectors. You can use any of the input argument combinations in previous syntaxes.

`[V,D,flag] = eigs(___)` also returns a convergence flag. If `flag` is `0`, then all the eigenvalues converged.

## Examples


Compute the six largest magnitude eigenvalues of a sparse matrix.

```
A = delsq(numgrid('C',15));
d = eigs(A)
```
```
d =
    7.8666
    7.7324
    7.6531
    7.5213
    7.4480
    7.3517
```

Specify a second input to compute a specific number of the largest eigenvalues.

```
d = eigs(A,3)
```
```
d =
    7.8666
    7.7324
    7.6531
```

Compute the five smallest eigenvalues of a sparse matrix.

```
A = delsq(numgrid('C',15));
d = eigs(A,5,'sm')
```
```
d =
    0.5520
    0.4787
    0.3469
    0.2676
    0.1334
```

Create a 1500-by-1500 random sparse matrix with a 25% approximate density of nonzero elements.

```
n = 1500;
A = sprand(n,n,0.25);
```

Find the LU factorization of the matrix, returning a permutation vector `p` that satisfies `A(p,:) = L*U`.

```[L,U,p] = lu(A,'vector'); ```

Create a function handle `Afun` that accepts a vector input `x` and uses the results of the LU decomposition to, in effect, return `A\x`.

```Afun = @(x) U\(L\(x(p))); ```

Calculate the six smallest magnitude eigenvalues using `eigs` with the function handle `Afun`. The second input is the size of `A`.

```
d = eigs(Afun,1500,6,'sm')
```
```
d =
   0.1423 + 0.0000i
   0.4859 + 0.0000i
  -0.3323 - 0.3881i
  -0.3323 + 0.3881i
   0.1019 + 0.5381i
   0.1019 - 0.5381i
```

`west0479` is a real-valued 479-by-479 sparse matrix with both real and complex pairs of conjugate eigenvalues.

Load the `west0479` matrix, then compute and plot all of the eigenvalues using `eig`. Since the eigenvalues are complex, `plot` automatically uses the real parts as the x-coordinates and the imaginary parts as the y-coordinates.

```
load west0479
A = west0479;
d = eig(full(A));
plot(d,'+')
```

The eigenvalues are clustered along the real line (x-axis), particularly near the origin.

`eigs` has several options for `sigma` that can pick out the largest or smallest eigenvalues of varying types.

Compute and plot three eigenvalues for each of the available options for `sigma`. (Since `A` is not symmetric, the `'la'`, `'sa'`, and `'be'` options are not available.)

```
figure
hold on
sm = eigs(A,3,'sm');
plot(sm,'r*')
lm = eigs(A,3,'lm');
plot(lm,'b*')
sr = eigs(A,3,'sr');
plot(sr,'mo')
lr = eigs(A,3,'lr');
plot(lr,'co')
opts.p = 45;
li = eigs(A,3,'li',opts);
plot(li,'g^')
si = eigs(A,3,'si',opts);
plot(si,'k^')
legend('Smallest magnitude SM','Largest magnitude LM','Smallest real SR',...
    'Largest real LR','Largest imaginary LI','Smallest imaginary SI')
xlabel('Real axis')
ylabel('Imaginary axis')
title('Six types of eigenvalues')
```

Create a symmetric positive definite sparse matrix.

```A = delsq(numgrid('C', 150)); ```

Compute the six smallest algebraic eigenvalues using `'sa'`, which employs a Krylov method using `A`.

```
tic
d = eigs(A, 6, 'sa')
toc
```
```
d =
    0.0013
    0.0025
    0.0033
    0.0045
    0.0052
    0.0063
Elapsed time is 378.648088 seconds.
```

Compute the same eigenvalues using `'sm'`, which employs a Krylov method using the inverse of `A`.

```
tic
dsm = eigs(A, 6, 'sm')
toc
```
```
dsm =
    0.0063
    0.0052
    0.0045
    0.0033
    0.0025
    0.0013
Elapsed time is 11.244197 seconds.
```

The eigenvalues are clustered near zero. The `'sa'` computation struggles to converge using `A` because the gaps between the eigenvalues are so small. Conversely, the `'sm'` option uses the inverse of `A`, whose eigenvalues are the reciprocals of the eigenvalues of `A`; the reciprocals have much larger gaps and are therefore easier to compute.
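A minimal sketch, using two eigenvalue estimates taken from the output above, shows why inversion helps:

```
% Sketch: inversion spreads out eigenvalues clustered near zero.
lam = [0.0013 0.0025];          % two clustered eigenvalues of A (from above)
gapA = lam(2) - lam(1)          % tiny gap, so 'sa' converges slowly
gapInv = 1/lam(1) - 1/lam(2)    % far larger gap, so 'sm' converges quickly
```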

Create a 1000-by-1000 matrix that is nearly symmetric.

```
rng(3)
U = orth(randn(1000));
d = randn(500,1);
A = U*diag([d; d])*U';
```

Preview the top-left corner of the matrix.

```
A(1:6,1:6)
```
```
ans =
   -0.0146   -0.0403   -0.0302   -0.0083   -0.0097   -0.0283
   -0.0403    0.0943    0.0733    0.0231    0.0249   -0.0051
   -0.0302    0.0733    0.0078    0.0417    0.0554   -0.0987
   -0.0083    0.0231    0.0417    0.1296   -0.0622    0.0298
   -0.0097    0.0249    0.0554   -0.0622    0.0300   -0.0316
   -0.0283   -0.0051   -0.0987    0.0298   -0.0316    0.0410
```

Calculate the six smallest magnitude eigenvalues of `A`.

```
e = eigs(A,6,'sm')
```
```
e =
   0.0031 + 0.0000i
   0.0031 + 0.0000i
   0.0076 - 0.0000i
   0.0076 + 0.0000i
   0.0099 + 0.0000i
   0.0099 + 0.0000i
```

`eigs` detects when a matrix is symmetric and uses a specialized algorithm for that case. Because `A` is only nearly symmetric, not exactly symmetric, `eigs` does not use the symmetric algorithm, and the computed eigenvalues contain small imaginary parts.

Use `A = (A+A')/2` to make the matrix symmetric and recalculate the six smallest magnitude eigenvalues.

```
A = (A+A')/2;
e = eigs(A,6,'sm')
```
```
e =
    0.0099
    0.0099
    0.0076
    0.0076
    0.0031
    0.0031
```

`A = delsq(numgrid('C',30))` is a symmetric positive definite matrix of size 632 with eigenvalues reasonably well distributed in the interval (0, 8), but with 18 eigenvalues repeated at 4.

Use the `eig` function to compute all 632 eigenvalues and the `eigs` function to compute the six largest and smallest magnitude eigenvalues.

```
A = delsq(numgrid('C',30));
d = sort(eig(full(A)));
dlm = eigs(A);
dsm = eigs(A,6,'sa');
```

Plot the results from `eig` and `eigs` for the six largest and smallest magnitude eigenvalues.

```
subplot(2,1,1)
plot(dlm,'k+')
hold on
plot(d(end:-1:end-5),'ks')
hold off
legend('eigs(A)','eig(full(A))')
xlim([0.5 6.5])
subplot(2,1,2)
plot(dsm,'k+')
hold on
plot(d(1:6),'ks')
hold off
legend('eigs(A,6,''sa'')','eig(full(A))','Location','SouthEast')
xlim([0.5 6.5])
```

The repeated eigenvalue at 4 must be handled more carefully. The call `eigs(A,20,4.0)` to compute 20 eigenvalues near 4.0 tries to find eigenvalues of `A - 4.0*I`. This involves divisions of the form `1/(lambda - 4.0)`, where `lambda` is an estimate of an eigenvalue of `A`. As `lambda` gets closer to 4.0, `eigs` fails. You must use a value of `sigma` that is near, but not equal to, 4 to find those eigenvalues.

```
sigma = 4 - 1e-6;
D = sort(eigs(A,20,sigma));
```

Plot the 20 eigenvalues closest to 4 computed by `eig` against the 20 eigenvalues closest to `4 - 1e-6` computed by `eigs`.

```
figure(2)
plot(d(307:326),'ks')
hold on
plot(D,'k+')
hold off
legend('eig(A)','eigs(A,20,sigma)')
title('18 Repeated Eigenvalues of A')
```

Create sparse random matrices `A` and `B` that both have low densities of nonzero elements.

```
B = sprandn(1e3,1e3,0.001) + speye(1e3);
B = B'*B;
A = sprandn(1e3,1e3,0.005);
A = A+A';
```

Find the Cholesky decomposition of matrix `B`, using three outputs to return the permutation vector `s` and test value `p`.

```
[R,p,s] = chol(B,'vector');
p
```
```
p =
     0
```

Since `p` is zero, `B` is a symmetric positive definite matrix that satisfies `B(s,s) = R'*R`.

Calculate the six largest magnitude eigenvalues and eigenvectors of the generalized eigenvalue problem involving `A` and `R`. Since `R` is the Cholesky factor of `B`, specify `opts.cholB = true`. Furthermore, since `B(s,s) = R'*R` and thus `R = chol(B(s,s))`, use the permutation vector `s` to specify `opts.permB = s`.

```
opts.cholB = true;
opts.permB = s;
[V,D,flag] = eigs(A,R,6,'lm',opts);
flag
```
```
flag =
     0
```

Since `flag` is zero, all of the eigenvalues converged.

## Input Arguments


`A` — Input matrix, specified as a square matrix. `A` is typically, but not always, a large and sparse matrix.

If `A` is symmetric, then `eigs` uses a specialized algorithm for that case. If `A` is nearly symmetric, then consider using `A = (A+A')/2` to make `A` symmetric before calling `eigs`.

Data Types: `double`
Complex Number Support: Yes

`B` — Input matrix, specified as a square matrix of the same size as `A`. When you specify `B`, `eigs` solves the generalized eigenvalue problem `A*V = B*V*D`. To solve the standard eigenvalue problem `A*V = V*D` while still specifying later arguments, pass an empty matrix for `B`, as in `eigs(A,[],sigma,...)`.
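As a small sketch (reusing the `delsq` matrix from the examples on this page), passing `[]` for `B` selects the standard problem while later arguments are still supplied:

```
% Sketch: an empty B selects the standard problem A*V = V*D.
A = delsq(numgrid('C',15));
d = eigs(A,[],4,'sm');   % equivalent to eigs(A,4,'sm')
```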

If `B` is symmetric, then `eigs` uses a specialized algorithm for that case. If `B` is nearly symmetric, then consider using `B = (B+B')/2` to make `B` symmetric before calling `eigs`.

Data Types: `double`
Complex Number Support: Yes

`k` — Number of eigenvalues to compute, specified as a positive scalar integer.

Example: `eigs(A,2)` returns the two largest magnitude eigenvalues of `A`.

`sigma` — Type of eigenvalues, specified as one of the values in this table.

| `sigma` | Requirements | Description |
| --- | --- | --- |
| scalar (real or complex, including 0) | | The eigenvalues closest to the number `sigma`. |
| `'lm'` (default) | | Largest magnitude. |
| `'sm'` | | Smallest magnitude. Same as `sigma = 0`. |
| `'la'` | Works with problems that have real symmetric `A` and, if specified, symmetric positive definite `B`. | Largest algebraic. |
| `'sa'` | Same requirements as `'la'`. | Smallest algebraic. |
| `'be'` | Same requirements as `'la'`. | Both ends (one more from the high end if `k` is odd). |
| `'lr'` | Works with nonsymmetric and complex problems. | Largest real part. |
| `'sr'` | Same requirements as `'lr'`. | Smallest real part. |
| `'li'` | Same requirements as `'lr'`. | Largest imaginary part. |
| `'si'` | Same requirements as `'lr'`. | Smallest imaginary part. |

Example: `eigs(A,k,1)` returns the `k` eigenvalues closest to 1.

Example: `eigs(A,k,'sm')` returns the `k` smallest magnitude eigenvalues.

`opts` — Options structure, specified as a structure containing one or more of the fields in this table. Unset fields take the default value, indicated by `{}` in the Values column.

| Option Field | Description | Values |
| --- | --- | --- |
| `opts.issym` | Set to `1` if matrix `A` is symmetric. If using `Afun`, this option specifies whether `A-sigma*I` or `A-sigma*B` is symmetric. | `[{0} \| 1]` |
| `opts.isreal` | Set to `1` if `A` is real. If using `Afun`, this option specifies whether `A-sigma*I` or `A-sigma*B` is real. | `[0 \| {1}]` |
| `opts.tol` | Convergence tolerance, which is compared to the Ritz estimate residual: `RitzRes <= tol*norm(A)`. | `[scalar \| {eps}]` |
| `opts.maxit` | Maximum number of iterations. | `[integer \| {300}]` |
| `opts.p` | Number of Lanczos basis vectors. The recommended value is `p >= 2*k`, or `p >= 2*k+1` for real nonsymmetric problems. If you do not specify a `p` value, then the default algorithm uses at least 20 Lanczos vectors. `p` must satisfy `k < p <= n` for real symmetric problems, and `k+1 < p <= n` otherwise. Note: If `eigs` returns a warning message about not converging, then increasing the value of `p` can help. However, increasing `p` too much can cause memory issues. | `[integer \| {2*k}]` |
| `opts.v0` | Starting vector. | `[n-by-1 vector \| {vector randomly generated by rand}]` |
| `opts.disp` | Diagnostic information display level. `0` is off, `1` is some, `2` is all. | `[{0} \| 1 \| 2]` |
| `opts.cholB` | Set to `1` if `B` is actually the Cholesky factor `chol(B)`. Do not use this option if `sigma` is `'sm'` or a numeric scalar. | `[{0} \| 1]` |
| `opts.permB` | Permutation vector to specify when sparse `B` is really `chol(B(permB,permB))`. Do not use this option if `sigma` is `'sm'` or a numeric scalar. | `[permB \| {1:n}]` |

Example: `opts.issym = 1, opts.tol = 1e-10` creates a structure with values set for the fields `issym` and `tol`.

Data Types: `struct`

`Afun` — Matrix function, specified as a function handle. The function `Y = Afun(x)` must return the proper value depending on the `sigma` input:

• `A*x` — If `sigma` is unspecified or is any text option other than `'sm'`.

• `A\x` — If `sigma` is `0` or `'sm'`.

• `(A-sigma*I)\x` — If `sigma` is a nonzero scalar (standard eigenvalue problem).

• `(A-sigma*B)\x` — If `sigma` is a nonzero scalar (generalized eigenvalue problem).

For example, the following `Afun` works when calling `eigs` with `sigma = 'sm'`:

```
[L,U,p] = lu(A,'vector');
Afun = @(x) U\(L\(x(p)));
d = eigs(Afun,100,6,'sm')
```

For a generalized eigenvalue problem, add matrix `B` as follows (`B` cannot be represented by a function handle):

`d = eigs(Afun,100,B,6,'sm')`

`A` is assumed to be real and nonsymmetric unless `opts.isreal` or `opts.issym` specify otherwise. For information on how to provide additional parameters to the `Afun` function, see Parameterizing Functions.

Data Types: `function_handle`

`n` — Size of square matrix `A` that is used by `Afun`, specified as a positive scalar integer.

Example: `eigs(Afun,100000)`

## Output Arguments


`d` — Eigenvalues, returned as a column vector.

`V` — Eigenvectors, returned as a matrix. The columns of `V` correspond to the eigenvalues along the diagonal of `D`.

`D` — Eigenvalue matrix, returned as a diagonal matrix with the eigenvalues on the main diagonal.

`flag` — Convergence flag, returned as a scalar. A value of `0` indicates that all the eigenvalues converged; otherwise, not all of the eigenvalues converged.


## Tips

• Unless you provide a start vector with `opts.v0`, `eigs` generates the default start vector with `rand`, so the iteration can differ from run to run, possibly even in convergence behavior. For reproducible results, specify a start vector in `opts.v0`, or set the random number generator state with `rng` before calling `eigs` so that `rand` produces the same numbers on each run.
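For example, a minimal sketch of both approaches (the matrix is assumed for illustration):

```
% Sketch: two ways to make eigs runs reproducible.
A = delsq(numgrid('C',15));
rng(0)                         % option 1: fix the rand state before calling eigs
d1 = eigs(A,4);
opts.v0 = ones(size(A,1),1);   % option 2: supply an explicit start vector
d2 = eigs(A,4,'lm',opts);
```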

• Using `eigs` is not the most efficient way to find a few eigenvalues of small, dense matrices. If the problem fits into memory, it might be quicker to use `eig(full(A))`. For example, finding three eigenvalues in a 500-by-500 matrix is a relatively small problem that is easily handled with `eig`.

• `eigs` does not always return sorted eigenvalues and eigenvectors. Use `sort` to explicitly sort the output eigenvalues and eigenvectors in cases where their order is important.
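For instance, a minimal sketch of sorting eigenpairs (the matrix is assumed for illustration):

```
% Sketch: sort eigenvalues in ascending order and reorder the
% eigenvector columns to match.
A = delsq(numgrid('C',15));
[V,D] = eigs(A,6,'sm');
[dSorted,idx] = sort(diag(D));
D = diag(dSorted);   % eigenvalues now ascending on the diagonal
V = V(:,idx);        % eigenvector columns reordered to match
```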

• If `eigs` fails to converge for a given matrix, increase the number of Lanczos basis vectors by increasing the value of `opts.p`. As secondary options, adjusting the maximum number of iterations, `opts.maxit`, and the convergence tolerance, `opts.tol`, can also help with convergence behavior.

## References

[1] Lehoucq, R.B. and D.C. Sorensen, "Deflation Techniques for an Implicitly Re-Started Arnoldi Iteration." SIAM J. Matrix Analysis and Applications. Vol. 17, 1996, pp. 789–821.

[2] Sorensen, D.C., "Implicit Application of Polynomial Filters in a k-Step Arnoldi Method." SIAM J. Matrix Analysis and Applications. Vol. 13, 1992, pp. 357–385.