Solving tridiagonal MATLAB matrix using Gauss-Seidel (can't use another method)


Hi all, I want to solve this tridiagonal 2560×2560 sparse matrix using the Gauss-Seidel iterative method, but it is taking forever to solve (3 days and counting). Please advise how this could be solved faster. Thanks so much in advance for your help.

Accepted Answer

John D'Errico
John D'Errico on 26 Feb 2024
Edited: John D'Errico on 27 Feb 2024
Why in the name of god and little green apples are you using Gauss-Seidel for this? (I can only assume this is a homework assignment, intended to show you why some iterative solvers need not always converge.) If that is the case, then not getting a solution is not a problem. WTP?
A direct solve using backslash takes a tiny fraction of a second, provided you keep the matrix sparse. And since you DID create it as sparse, WHY then convert it to a full matrix? WHY? WHY?
n=2560;
e=ones(n,1);
A=spdiags([-1*e 2*e -1*e],-1:1,n,n);
A(1,1)=3;
A(n,n)=3;
% A=full(A); % WHY MAKE IT A FULL MATRIX?????????
B=repelem([210,10,1010],[1,n-2,1])';
x = A\B;
timeit(@() A\B)
ans = 4.8535e-05
So a TINY fraction of a second to perform the solve.
Note that some matrices cannot be solved using Gauss-Seidel. This is why other iterative solvers are employed. Here, an iterative solver is just insane though. Far too small a problem to bother with one.
Should Gauss-Seidel converge at all on this problem?
full(A(1:10,1:10))
ans = 10x10
     3    -1     0     0     0     0     0     0     0     0
    -1     2    -1     0     0     0     0     0     0     0
     0    -1     2    -1     0     0     0     0     0     0
     0     0    -1     2    -1     0     0     0     0     0
     0     0     0    -1     2    -1     0     0     0     0
     0     0     0     0    -1     2    -1     0     0     0
     0     0     0     0     0    -1     2    -1     0     0
     0     0     0     0     0     0    -1     2    -1     0
     0     0     0     0     0     0     0    -1     2    -1
     0     0     0     0     0     0     0     0    -1     2
Gauss-Seidel converges for strictly diagonally dominant problems, but also for symmetric positive definite matrices.
Strict diagonal dominance does not hold here: every interior row has |2| = |-1| + |-1|. G-S can still be applied to some problems that are not strictly diagonally dominant, but there it may fail outright, or converge extremely slowly, and that is surely the case here. As such, I am not even remotely surprised that it is not converging for you as fast as you want. As for positive definiteness, A is indeed positive definite, but only barely. A quick check of the spectral radius of the Gauss-Seidel iteration matrix gives 0.999999247009037, so close to 1 that Gauss-Seidel will probably never converge in any reasonable time.
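To make that concrete, here is a minimal Gauss-Seidel sketch (my own illustration, not the original poster's code). It splits A into its lower triangle and strict upper triangle, estimates the spectral radius of the iteration matrix, and runs a thousand sweeps to show how little progress is made:

```matlab
n = 2560;
e = ones(n,1);
A = spdiags([-e 2*e -e], -1:1, n, n);
A(1,1) = 3;
A(n,n) = 3;
B = repelem([210,10,1010],[1,n-2,1])';

% Gauss-Seidel splitting: A = L + U, with L the lower triangle
% (including the diagonal) and U the strict upper triangle.
L = tril(A);
U = triu(A,1);

% The spectral radius of the iteration matrix -L\U governs convergence.
% Expect a value just barely below 1.
rho = abs(eigs(@(x) -(L\(U*x)), n, 1, 'largestabs'))

% A thousand Gauss-Seidel sweeps barely dent the residual.
x = zeros(n,1);
for k = 1:1000
    x = L\(B - U*x);   % one Gauss-Seidel sweep
end
norm(A*x - B)          % still nowhere near zero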
What should you do to make it converge faster? Don't use Gauss-Seidel. Use backslash. Yeah, I know, you can't. ARGH.
x = A\B;
norm(A*x - B)
ans = 3.4727e-08
You might play around with successive over-relaxation or other (better) iterative solvers, but I would guess it won't help that much here. Yes, it will be faster, but this is a bad problem for iterative methods. A quick test showed that SOR, even using the optimal parameter, was not even close to convergence after half a million iterations.
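For reference, here is a hedged sketch of what such an SOR test might look like (my own code, not the test I actually ran). It estimates the Jacobi spectral radius to pick the classical optimal omega, then runs the standard SOR sweep; with the Jacobi radius so close to 1, omega lands near 2 and progress is still glacial:

```matlab
n = 2560;
e = ones(n,1);
A = spdiags([-e 2*e -e], -1:1, n, n);
A(1,1) = 3;
A(n,n) = 3;
B = repelem([210,10,1010],[1,n-2,1])';

D  = diag(diag(A));
Lo = tril(A,-1);
Up = triu(A,1);

% Classical optimal relaxation parameter: omega = 2/(1 + sqrt(1 - rhoJ^2)),
% where rhoJ is the spectral radius of the Jacobi iteration matrix.
rhoJ  = abs(eigs(@(x) -(D\((Lo+Up)*x)), n, 1, 'largestabs'));
omega = 2/(1 + sqrt(1 - rhoJ^2))   % close to 2 for this problem

x = zeros(n,1);
for k = 1:1000
    % One SOR sweep: (D + omega*Lo) x_new = omega*B + ((1-omega)*D - omega*Up) x
    x = (D + omega*Lo) \ (omega*B + ((1-omega)*D - omega*Up)*x);
end
norm(A*x - B)   % still far from converged, consistent with the text above
```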
So again, the answer is your code was not expected to converge.
  2 Comments
John D'Errico
John D'Errico on 27 Feb 2024
Let me expand on my answer. Gauss-Seidel will converge (IN THEORY) if the matrix is STRICTLY diagonally dominant, or if the matrix is symmetric, positive definite.
But convergence in theory is not always a relevant concept. For example, the Taylor series for sin(x) is globally convergent: it converges, in theory, for ANY value of x. The problem is that this fails when you try to compute sin(x) in double precision arithmetic for large values of x. For example, sin(100) will yield garbage IF you sum the standard series, because of numerical problems. Only if you were to use a huge number of digits in your computations, AND a rather large number of terms in that series, would you see success. In double precision, however, it will fail.
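A quick sketch of that sin(x) example (my own illustration): summing the series x - x^3/3! + x^5/5! - ... term by term, using the recurrence between consecutive terms. The intermediate terms peak around 1e42, so with roughly 16 significant digits the cancellation destroys every correct digit of the answer:

```matlab
x = 100;
term = x;      % first term of x - x^3/3! + x^5/5! - ...
s = term;
for k = 1:200
    term = -term * x^2 / ((2*k)*(2*k+1));  % ratio of successive terms
    s = s + term;
end
% The summed series s is garbage (enormous, wrong sign, no correct digits),
% while the true value sin(100) is about -0.506.
[s, sin(x)]
```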
This problem is similar in a sense, though of course different, since this is about linear algebra. Suppose adequate convergence takes 1e100 iterations, something you could not perform before the universe undergoes heat death. The mathematics would say that convergence is possible in THEORY, but practically, you would be wasting your time: you would be long dead before you got the answer, and past caring.
This is what you need to understand. Anyway, you were surely assigned this problem to teach you that not all solvers always converge. It is something some teachers do: give you an impossible problem. My hope is that you have learned that lesson, but based on your frantic e-mails to me, I think you don't believe me.
John D'Errico
John D'Errico on 27 Feb 2024
Edited: John D'Errico on 27 Feb 2024
I did some extra testing. For example, lsqr is normally a decent iterative solver. On this matrix, however, I'll compare what it does against the true solution as given by backslash.
n=2560;
e=ones(n,1);
A=spdiags([-1*e 2*e -1*e],-1:1,n,n);
A(1,1)=3;
A(n,n)=3;
% A=full(A); % WHY MAKE IT A FULL MATRIX?????????
B=repelem([210,10,1010],[1,n-2,1])';
xslash = A\B;
norm(A*xslash-B)
ans = 3.4727e-08
So backslash does solve the problem. Now, try lsqr on this difficult matrix.
norm(xslash - lsqr(A,B))
lsqr stopped at iteration 20 without converging to the desired tolerance 1e-06 because the maximum number of iterations was reached. The iterate returned (number 20) has relative residual 0.45.
ans = 3.0271e+08
Essentially, lsqr, normally a pretty decent iterative solver, fails miserably given the default arguments. But if I push things a bit...
[xlsqr,flag,relres,iter] = lsqr(A,B,1e-8,1000000);
norm(xslash - xlsqr)
ans = 0.2937
norm(A*xlsqr-B)
ans = 2.4924e-05
So it is getting there. But this is a problem chosen to be terribly difficult for even good iterative solvers to work well on. Gauss-Seidel pretty much gives up the ghost.
I think you were not expected to succeed here.



Release

R2023a
