Different output using mldivide with newer matlab version

Problem introduction
I have a simple system of 4 linear equations with 4 unknowns, which can be written as A*x = b. The content of the matrices is written out below.
The analytical solution to this equation is x = [1; 0; -b2; -b4] = [1; 0; -286.5785; 68.7433].
Observation
Asking MATLAB R2017b to solve this system using mldivide returns the exact answer. However, MATLAB R2024a returns a slightly different answer for the exact same question, where 3 out of the 4 elements in x are off to the order of 1e-11.
This exact observation can be reproduced using the attached files.
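For reference, here is a minimal sketch of what the attached test script presumably does. The file itself is not reproduced here; A and b are copied from the 4-decimal rounded display later in this thread, so this sketch will not bit-for-bit reproduce the original run (that requires the original data file):

```matlab
% Hypothetical reconstruction of the attached test script.
% A and b are the rounded values displayed in the comments below.
A = [-0.0011        0  -1   0;
           0        0  -1   0;
      0.4982        0   0  -1;
           0 100.4985   0  -1];
b = [286.5775; 286.5785; -68.2451; -68.7433];

x_analytical = [1; 0; -b(2); -b(4)];  % exact solution of this system

x   = A\b                 % solve with mldivide
err = x - x_analytical    % compare against the analytical solution
```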
The run in R2017b (err is the difference between the mldivide solution and the analytical solution):
>> test
x =
1.0e+02 *
0.010000000000000
0
-2.865785081173561
0.687432906991598
err =
0
0
0
0
The run in R2024a:
>> test
x =
1.0e+02 *
0.010000000000097
0.000000000000000
-2.865785081173561
0.687432906991647
err =
1.0e-11 *
0.972688596334592
0.004821862926227
0
0.484590145788388
Question
Simply: how come?
Is it simply a numerical error? What changed in mldivide between R2017b and R2024a? The R2022b release notes mention "Improved performance with small matrices" for mldivide. Is this the cause?
And how can I overcome this issue? A lot of my test functions are failing; is there a proper, robust fix for this? Or should I take a close look at my failing test functions and update them if possible? Any other suggestions?

3 Comments

I thought by including my test files the issue would be reproducible. I've updated my question to include the test results.
You're hinting at the fact that tiny differences simply exist in numerical computations, and while 1e-11 is indeed tiny, I find it strange that this occurs in R2024a and not in R2017b. If R2017b already showed numerical errors, I wouldn't be bothered and I would apply some tolerances of the order 1e-10 to my problem. But since R2017b was originally correct but R2024a is simply not, I'd like to know more about this case. Mind you, it's not the only case.
Someone who has access to both versions (R2017b and R2024a) will have to respond to you about this. I don't, so I deleted my answer.
Hi Steven H,
There is a similar discussion at this Question that may be of interest.


Answers (1)

We do not guarantee that the results from mldivide will be exactly the same, down to the last bit, across:
  • different operating systems running the same release of MATLAB (potential differences in system math libraries)
  • different releases of MATLAB running on the same machine (bug fixes in MATLAB or different versions of certain math libraries included in MATLAB, which could be for bug fixes in those libraries or performance improvements or enhanced accuracy)
  • the same release of MATLAB running on the same machine with different math libraries (Intel MKL versus AMD AOCL, for example.)
You can check which libraries you're using with the command version -lapack (or for the BLAS, version -blas) as shown in this Answers post.
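For example (the exact output varies by machine and release; shown here only to illustrate the commands):

```matlab
% Report which math libraries this MATLAB installation uses
version -lapack   % LAPACK version MATLAB was built against
version -blas     % likewise for the BLAS
```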
"A lot of my test functions are failing; is there a proper, robust fix for this?"
Yes. Don't have your test functions test for exact, down-to-the-last-bit equality of results unless you have a good reason to expect or require that level of agreement. (For example, 1+1 must always equal exactly 2 regardless of MATLAB, OS, or library version. Another is running the same computation twice in a row in the same MATLAB session, as long as no random numbers are involved.)
Instead, specify a tolerance. If you're using the unit testing framework included in MATLAB (specifically verifyEqual) you can specify an AbsTol (absolute tolerance) and/or RelTol (relative tolerance). As an example, sin(π) should give exactly 0. But since the MATLAB function pi does not return the transcendental value of π but its double-precision approximation, sin(pi) is close to but not exactly equal to 0. So the exact, down-to-the-last-bit test fails.
tc = matlab.unittest.TestCase.forInteractiveUse;
verifyEqual(tc, sin(pi), 0) % fails
Verification failed.
    ---------------------
    Framework Diagnostic:
    ---------------------
    verifyEqual failed.
    --> The numeric values are not equal using "isequaln".
    --> Failure table:
                Actual         Expected         Error          RelativeError
        ____________________   ________   ____________________   _____________
        1.22464679914735e-16      0       1.22464679914735e-16        Inf

    Actual Value:
        1.224646799147353e-16
    Expected Value:
        0
But if we specify a tolerance, it's "close enough" to 0.
verifyEqual(tc, sin(pi), 0, AbsTol = eps) % passes
Verification passed.
Or in this particular case, to avoid the difference between pi and π you could use sinpi.
verifyEqual(tc, sinpi(1), 0) % no tolerance, passes
Verification passed.

4 Comments

I understand that tiny numerical differences may occur between different versions of MATLAB, but I also think that a difference of 1e-11 on such a basic problem is rather large. There is an exact solution and MATLAB "used to" be able to find it. I was hoping there was more to this...
NO. That is NOT a large difference. I think you do not understand the linear algebra here.
load data
A
A = 4x4
   -0.0011         0   -1.0000         0
         0         0   -1.0000         0
    0.4982         0         0   -1.0000
         0  100.4985         0   -1.0000
b
b = 4x1
  286.5775
  286.5785
  -68.2451
  -68.7433
Look at your matrices.
rank(A)
ans = 4
cond(A)
ans = 1.5081e+05
A is not singular, but it has a condition number of 1.5e5. That is not huge, but it tells us to what extent we expect it to amplify any least-significant-bit noise in the problem. I would expect the solution to have trash in it, roughly on the order of
eps(cond(A)*norm(b))
ans = 7.4506e-09
So in context, 1e-11 was pretty good, a factor of 10 smaller than my worst fears. The difference you saw IS tiny, and actually better than it could have been.
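To see the amplification concretely, here is a sketch using the rounded A and b displayed above (so the numbers are illustrative, not the original data): perturbing b in its least significant bits already moves the mldivide solution by an amount bounded by roughly cond(A)*eps*norm(x).

```matlab
% Illustrative sketch: sensitivity of x = A\b to last-bit noise in b.
A = [-0.0011 0 -1 0; 0 0 -1 0; 0.4982 0 0 -1; 0 100.4985 0 -1];
b = [286.5775; 286.5785; -68.2451; -68.7433];

x  = A\b;
bp = b .* (1 + eps);   % perturb each entry of b by one ulp (relative)
xp = A\bp;

norm(xp - x)           % worst case is about cond(A)*eps*norm(x)
```

Version-to-version differences in the internal factorization act like exactly this kind of last-bit perturbation, which is why two releases can legitimately disagree at the 1e-11 level.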
MATLAB would never have used to find the exact solution, because an exact solution to that problem does not exist in double precision arithmetic. And MATLAB has ALWAYS used double precision arithmetic.
An exact solution exists ONLY if you are using symbolic tools. And there we see the solution you expected.
syms b1 b2 b3 b4 A42
A = [b1-b2 0 -1 0;0 0 -1 0;b3-b4 0 0 -1;0 A42 0 -1];
A\[b1;b2;b3;b4]
ans =
  ( 1 )
  ( 0 )
  (-b2)
  (-b4)
Sometimes however, even double precision arithmetic can get lucky. That may have been what happened for you in the old release. But there would never have been any expectation that that older release would always get lucky on such a problem, because it used double precision arithmetic!
FYI, from the Wikipedia entry for condition number: "As a rule of thumb, if the condition number κ(A) = 10^k, then you may lose up to k digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods."
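Applied to this system, the rule of thumb reads as follows (illustrative sketch; A is the rounded matrix shown above):

```matlab
% Digits of accuracy at risk, per the condition-number rule of thumb
A = [-0.0011 0 -1 0; 0 0 -1 0; 0.4982 0 0 -1; 0 100.4985 0 -1];
k = log10(cond(A))   % about 5, so expect to lose up to ~5 of the ~16 digits
```

Losing ~5 of double precision's ~16 significant digits leaves roughly 11 trustworthy digits, which is exactly the scale of the 1e-11 differences observed.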
You might also find section 9 (Norms and Condition Numbers) of the chapter "Linear Equations" in Cleve Moler's Numerical Computing with MATLAB textbook interesting.
With your explanation @John D'Errico I am convinced that it's not a MATLAB issue but a general numerical-computation issue. I will find a suitable corresponding solution. If I could somehow accept your answer, I would. The answer of @Steven Lord was helpful, but not as much.


Asked: on 15 May 2024
Commented: on 21 May 2024
