Why is the timing for decomposition and backsolve for M and M' so different?

Question: Why is the timing for the LU decomposition and backsolve so different for M versus M'?
Background: I have a non-symmetric 377544 x 377544 sparse, real-valued matrix with the sparsity pattern shown in the spy plot below.
FM = decomposition(M);
Takes about 36 seconds, but
FadjM = decomposition(M');
Takes about 42 minutes!
On the other hand for F a 377544 x 12 real dense matrix,
test = FM \ F;
Takes about 23 seconds, but
test = FM' \ F;
Takes only 8.5 seconds.
If I try to decompose the transpose of M and then test the backsolves I find that
test = FadjM \ F;
takes about 300 seconds, while
test = FadjM' \ F;
also takes about 300 seconds.
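For reference, here is roughly the timing experiment as a single script (a sketch only; it assumes M and F are already loaded, and the variable names are just the ones used above):
tic; FM = decomposition(M); toc        % about 36 seconds
tic; FadjM = decomposition(M'); toc    % about 42 minutes
tic; test = FM \ F; toc                % about 23 seconds
tic; test = FM' \ F; toc               % about 8.5 seconds
tic; test = FadjM \ F; toc             % about 300 seconds
tic; test = FadjM' \ F; toc            % about 300 seconds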
Does anyone have any insights into what is going on? If anyone is interested in investigating more deeply, I'm happy to upload M.
Thank you,
Francois
  12 Comments
Francois on 10 Nov 2025

Thank you for the response. As I mentioned above, if I use [L, U, P, Q, R] = lu(M); or lu(M'), the difference in timing on my original matrix is less than a factor of 3, but decomposition(M') takes about 70 times longer than decomposition(M). So my question is about the particular algorithm or implementation in MATLAB's decomposition function.

Torsten on 10 Nov 2025
Edited: Torsten on 10 Nov 2025
Maybe it's best to address this question directly to the MATLAB developers; we are only MATLAB users and don't have insight into the implementation details.
Maybe @Christine Tobler can help.


Accepted Answer

Christine Tobler on 10 Nov 2025
Thank you for including the smaller matrices to reproduce the issue!
It seems you're running into an edge case where we detect likely ill-conditioning and redo the decomposition in a safer but more expensive way.
We can see some of this by turning on diagnostic messages for the sparse solver using the spparms function. I will use backslash directly, since this does the same as decomposition but has more helpful diagnostic messages.
load MandF.mat
spparms("spumoni", 1);
M \ F;
sp\: bandwidth = 48875+1+48875.
sp\: is A diagonal? no.
sp\: is band density (0.000276) > bandden (0.500000) to try banded solver, or is the matrix tridiagonal? no.
sp\: is A triangular? no.
sp\: is A morally triangular? no.
sp\: is A a candidate for Cholesky (symmetric, real positive or negative diagonal)? no.
sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
sp\: UMFPACK's factorization was successful.
sp\: UMFPACK's solve was successful.
Mt = M'; % explicitly transpose M, since this is what happens in decomposition(M').
% (backslash would detect the ctranspose and compute the equivalent of
% decomposition(M)'\F).
Mt \ F;
sp\: bandwidth = 48875+1+48875.
sp\: is A diagonal? no.
sp\: is band density (0.000276) > bandden (0.500000) to try banded solver, or is the matrix tridiagonal? no.
sp\: is A triangular? no.
sp\: is A morally triangular? no.
sp\: is A a candidate for Cholesky (symmetric, real positive or negative diagonal)? no.
sp\: use Unsymmetric MultiFrontal PACKage with automatic reordering.
sp\: UMFPACK's factorization was successful.
sp\: However, UMFPACK may have failed to produce a correct factorization possibly due to aggressive pivot tolerances or because the problem is ill-conditioned.
sp\: These tolerances were: Pivot Tolerance: 0.100000 Diagonal Pivot Tolerance: 0.001000
sp\: We will factor the matrix again but use standard partial pivoting.
sp\: UMFPACK's solve was successful.
The messages for the second case indicate that backslash detects what it believes to be a too ill-conditioned factorization and wants to make sure no precision was lost. It reacts by increasing the pivot tolerance to 1, which matches what would be done in the dense case: safer, but more expensive.
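One way to see the cost of the stricter pivoting directly is to raise the tolerances yourself before factoring (a sketch of an experiment only, not a recommendation; spparms exposes these as 'piv_tol' and 'sym_tol', which should correspond to the two tolerances printed in the diagnostics above):
oldvals = spparms;            % save the current sparse-solver parameters
spparms('piv_tol', 1.0);      % partial pivot tolerance of 1 = conventional partial pivoting
spparms('sym_tol', 1.0);      % diagonal pivot tolerance (default 0.001)
tic; M \ F; toc               % see whether the factorization now pays the "safer" pivoting cost up front
spparms(oldvals);             % restore the previous settings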
Now why would the condition estimate be so different? The condition of A and A' is of course the same, but this is a cheap estimate based on the computed factors. Specifically, the magnitude of the diagonal elements of the factor U is used.
tic;
[L, U, P, Q, D] = lu(M);
toc
Elapsed time is 0.693204 seconds.
tic;
[Lt, Ut, Pt, Qt, Dt] = lu(Mt);
toc
Elapsed time is 1.872775 seconds.
format shortg
full([min(abs(diag(U))), max(abs(diag(U)))])
ans = 1×2
8.5362e-05 583.02
full([min(abs(diag(Ut))), max(abs(diag(Ut)))])
ans = 1×2
3.6294e-10 530.92
Here's the reason for the discrepancy: the smallest absolute diagonal value of U is about 1e-10 for M' but about 1e-5 for M, so the cheap estimate flags M' as much worse conditioned. Note that looking at the diagonal values of U doesn't actually give a good estimate of the condition number; it's just the cheapest thing that gives anything approaching such an estimate. The alternative of iteratively solving linear systems would slow down backslash considerably.
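To put rough numbers on this, here is a sketch contrasting one plausible cheap proxy (the ratio of the extreme values of abs(diag(U))) with condest, the norm-based estimator that does extra solves with the factors, which is exactly the work backslash avoids for speed:
cheapEst  = max(abs(diag(U)))  / min(abs(diag(U)));   % roughly 7e6 from the factors of M
cheapEstT = max(abs(diag(Ut))) / min(abs(diag(Ut)));  % roughly 1.5e12 from the factors of M'
normEst   = condest(M);        % 1-norm condition estimate; identical for M and M', but slower to compute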
I think for the moment you are best off using the lu command directly, as you mentioned above.
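As a minimal sketch of that (assuming M and F from above): factor once with lu and reuse the factors for solves with both M and M'. With [L, U, P, Q, D] = lu(M) the factors satisfy P*(D\M)*Q = L*U, so both systems reduce to permutations, a diagonal scaling, and triangular solves:
[L, U, P, Q, D] = lu(M);                      % P*(D\M)*Q = L*U
x    = Q * (U \ (L \ (P * (D \ F))));         % solves M  * x    = F
xadj = D' \ (P' * (L' \ (U' \ (Q' * F))));    % solves M' * xadj = F
% quick residual checks
norm(M*x - F, 1) / norm(F, 1)
norm(M'*xadj - F, 1) / norm(F, 1)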
I will pass this case on to my colleagues to discuss it further.

More Answers (0)
