Calculating parts of the sparse matrix inverse

I have a sparse symmetric matrix H, containing on average 4 non-zero elements per row/col, and a sparse vector M with only two non-zero elements, +1 and -1, in positions t and f.
The vectors i and j hold the row and column indices, and x the corresponding values, of the nonzero elements of H:
n = 3000;
H = sparse(i,j,x,n,n);
M = sparse([t f],1,[1 -1],n,1);
I want to compute X = M'*H^-1*M. The LU factors of H were computed earlier.
I know the following works:
X = M' * (U \ (L \ (P \ M)));
However, I want to go from right to left, in such a way that only two columns of P are "visited" in the calculation of (P \ M), and similarly for (L \ (P \ M)).
Is there a way of doing this "manually"?
Thank you!

Answers (1)

Matt J on 26 Apr 2013 (edited 26 Apr 2013)
However, I want to go from right to left, in such a way that only two columns of P are "visited" in the calculation of (P \ M), and similarly for (L \ (P \ M)).
Well, the calculation of P\M has little to do with the columns of P. It's the rows that matter. In particular, since P is a permutation matrix, remember that inv(P)=P.'. So, if you wanted to optimize y=P\M, you could instead do
y=(P(t,:)-P(f,:)).';
As for L\y and U\(L\y), I don't think there's much you can do to optimize those stages. L and U have no simplified inverse, and the sparsity of U, L, and y is already taken into account by mldivide.
x = A(t,f); % Not good for sparse matrices
No idea why you think this is "not good". FIND will not be better, though.
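To see the permutation trick in action, here is a quick sketch (the values of n, t, and f are hypothetical; only P and M from the question are needed):

```matlab
% Sketch: since P is a permutation matrix, inv(P) = P.', so P\M equals
% P.'*M -- which is just rows t and f of P, differenced and transposed.
n = 3000; t = 10; f = 20;             % hypothetical sizes/locations
p = randperm(n);
P = sparse(1:n, p, 1, n, n);          % a random permutation matrix
M = sparse([t f], 1, [1 -1], n, 1);
y1 = P \ M;                           % generic solve
y2 = (P(t,:) - P(f,:)).';             % touches only two rows of P
norm(y1 - y2)                         % 0 -- the two agree
```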

6 Comments

Matt J on 26 Apr 2013 (edited 26 Apr 2013)
As for L\y and U\L\y, I don't think there's much you can do to optimize those stages.
On second thought, you should use LINSOLVE to solve those stages. LINSOLVE lets you tell the solver that L and U are lower/upper triangular, so it can exploit that. I would assume it's also smart enough to exploit sparsity, where possible.
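For full (non-sparse) factors, the suggestion looks like this (a sketch; linsolve requires full matrices, so full() conversions of the question's L, U, P, M are assumed):

```matlab
% Sketch of the LINSOLVE idea: opts.LT/opts.UT declare the factor
% lower/upper triangular, so linsolve goes straight to forward/back
% substitution without testing the matrix structure first.
optsL.LT = true;
optsU.UT = true;
y = linsolve(full(L), full(P \ M), optsL);   % forward substitution
x = linsolve(full(U), y, optsU);             % back substitution
X = M.' * x;
```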
Thank you!
The line y=(P(t,:)-P(f,:)).' was exactly what I was looking for.
As far as I can see, that is the so-called "fast-forward" substitution. According to some PhD theses I've read, there is a corresponding "fast-backward" substitution. I haven't seen it explained, so I believe it must be quite similar (or else it would have been explained).
Doesn't mldivide exploit the upper/lower triangular structure and use it where it can (forward/backward substitution)?
Also, I get an error message saying: "LINSOLVE is currently not supported for sparse inputs" (MATLAB R2013a).
Matt J on 26 Apr 2013 (edited 26 Apr 2013)
Doesn't mldivide exploit the upper/lower triangular structure and use it where it can (forward/backward substitution)?
Not sure. At the very least, it would have to pre-analyze the matrix to see if it is triangular, whereas linsolve doesn't have that overhead. From "doc linsolve":
linsolve is faster than mldivide, because linsolve does not perform any tests to verify that A has the specified properties
Also, I get an error message saying: "LINSOLVE is currently not supported for sparse inputs" (MATLAB R2013a).
Too bad... Well, experimentation shows me that L\y is orders of magnitude faster than H\M, so presumably you're getting some benefit from the LU pre-factorization. I vaguely wonder whether mldivide always does an initial LU factorization. If so, it incurs a bit of redundant overhead in L\y when factorizing L, but nothing crippling, obviously.
Thanks again Matt!
I'm doing these calculations thousands of times, and the location of the elements of M changes, so speed is essential. This is what I want to achieve (quoted from an article):
"When the independent vector is sparse, relative few of the columns of L have to be operated on. This is the so-called "fast-forward" substitution. ... If the solution is required for only a few network nodes, a "fast backward" substitution process can be carried out, operating on only a few of the rows of U."
I'm doing these calculations thousands of times, and the location of the elements of M changes, so speed is essential.
If only M is changing, but not H, then the optimal way to calculate X is to create a matrix MM whose columns are the different M vectors. Then you can process all of them in a single call to mldivide:
X=sum( MM.*(H\MM) ,1)
This is advantageous because mldivide then doesn't have to refactor H for each different M.
Also, mldivide is multithreaded, so it is likely that the processing of different M will be split into parallelized tasks.
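As a sketch of this batching (H and n are from the question; k and the index vectors tt, ff are hypothetical):

```matlab
% Build MM with one +1/-1 column per (t,f) pair, then solve once, so
% mldivide factors H a single time for all right-hand sides.
k = 500;                                   % hypothetical number of M vectors
tt = randi(n, k, 1);  ff = randi(n, k, 1); % hypothetical t/f locations
MM = sparse([tt; ff], [1:k, 1:k].', [ones(k,1); -ones(k,1)], n, k);
X = sum(MM .* (H \ MM), 1);                % X(j) = MM(:,j).'*inv(H)*MM(:,j)
```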
When the independent vector is sparse, relative few of the columns of L have to be operated on.
It's not true, which could account for the lack of a reference or explanation in the article. I'm assuming, of course, that L has no special structure that you haven't mentioned. To see that it's not true for general L, consider this example:
z=rand(1,n-1);
M=[1;zeros(n-2,1);-1];
L=eye(n)+diag(-z,-1);
You can readily verify that the solution to L*y=M, for any given vector z, is
y=[1;cumprod(z(:))]-[zeros(n-1,1);1];
The solution therefore involves all of the columns of L through cumprod(z).
That's not to say that the sparsity of L cannot be exploited, and mldivide certainly does so, but the inversion in general can involve all of L's columns.
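A quick numerical check of the example above (with a small hypothetical n, since any n works):

```matlab
% Verify the closed-form solution against the triangular system L*y = M.
n = 6;
z = rand(1, n-1);
M = [1; zeros(n-2,1); -1];
L = eye(n) + diag(-z, -1);
y = [1; cumprod(z(:))] - [zeros(n-1,1); 1];
norm(L*y - M)                              % ~0, up to rounding
```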
I'm sure the articles are correct.
I believe I have missed some special structure that would make this possible. I just noticed the sentence: "The key concept is the 'factor path' described in Sparse Vector Methods (Tinney et al., 1985)."
I haven't had the time to explore it yet, but hopefully the key to the solution is there.
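The "fast-forward" idea from the Tinney et al. reference can be sketched directly: in column-oriented forward substitution, any column j with y(j) == 0 contributes nothing and can be skipped, so only the columns on the factorization path of M's nonzeros are visited. (fastforward is a hypothetical helper name; it assumes a sparse lower-triangular L with nonzero diagonal.)

```matlab
function y = fastforward(L, b)
% Column-oriented forward substitution for L*y = b that skips columns
% whose solution entry is zero: only columns on the factorization path
% of b's nonzeros are ever visited.
n = size(L, 1);
y = full(b);
for j = 1:n
    if y(j) ~= 0                       % off the path -> skip column j
        y(j) = y(j) / L(j, j);
        r = find(L(j+1:n, j)) + j;     % rows with nonzeros below diag
        y(r) = y(r) - L(r, j) * y(j);
    end
end
end
```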

Asked on 26 Apr 2013