Why is 0.3 - 0.2 - 0.1 not equal to zero?

 Accepted Answer

Jan on 26 Dec 2012
Edited: Steven Lord on 14 Oct 2024
0.3 - 0.2 - 0.1 returns -2.7756e-17.
As is mentioned frequently in the newsgroup, some floating point numbers cannot be represented exactly in binary form. That is why you see the very small but nonzero result. See EPS.
The difference is that 0:0.1:0.4 increments by a number very close to, but not exactly, 0.1 for the reasons mentioned above. So after a few steps it will be off, whereas [0 0.1 0.2 0.3 0.4] forces the numbers to their proper values, as accurately as they can be represented anyway.
a = [0 0.1 0.2 0.3 0.4];
b = 0:.1:.4;
as = sprintf('%20.18f\n',a)
as =
0.000000000000000000 % ==
0.100000000000000010 % ==
0.200000000000000010 % ==
0.299999999999999990 % ~= bs !
0.400000000000000020 % ==
bs = sprintf('%20.18f\n',b)
bs =
0.000000000000000000 % ==
0.100000000000000010 % ==
0.200000000000000010 % ==
0.300000000000000040 % ~= as !
0.400000000000000020 % ==
and:
format hex;
hd = [a.',b.']
hd =
0000000000000000 0000000000000000 % ==
3fb999999999999a 3fb999999999999a % ==
3fc999999999999a 3fc999999999999a % ==
3fd3333333333333 3fd3333333333334 % ~= !
3fd999999999999a 3fd999999999999a % ==
If you're trying to compare two floating-point numbers, be very careful about using == to do so. An alternate comparison method is to check if the two numbers you're comparing are "close enough" (as expressed by a tolerance) to one another:
% instead of a == b
% use:
areEssentiallyEqual = abs(a-b) < tol
% for some small value of tol relative to a and b
% perhaps defined using eps(a) and/or eps(b)
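As a concrete illustration of the tolerance comparison (a sketch, using eps to pick a tolerance scaled to the operands):

```matlab
% Compare 0.3 - 0.2 - 0.1 against 0 with a tolerance instead of ==
x = 0.3 - 0.2 - 0.1;   % -2.7756e-17, not exactly 0
tol = eps(0.3);        % spacing of doubles near 0.3, about 5.55e-17
areEssentiallyEqual = abs(x - 0) < tol   % logical 1
```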
You can see this same sort of behavior outside MATLAB. Using pencil and paper (or a chalkboard, or a whiteboard, etc.) compute x = 1/3 to as many decimal places as you want. The number of decimal places must be finite, however. Now compute y = 3*x. In exact arithmetic, y would be exactly 1; however, since x is not exactly one third but is a rounded approximation to one third, y will not be exactly 1.
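The same pencil-and-paper experiment can be sketched in MATLAB by rounding 1/3 to a fixed number of decimal places first (round with a digits argument requires R2014b or later):

```matlab
x = round(1/3, 10);   % 0.3333333333, a 10-digit rounded approximation of 1/3
y = 3*x;              % 0.9999999999..., not exactly 1
y == 1                % logical 0
```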
For a readable introduction to floating point arithmetic, look at Cleve's Corner article from 1996: Floating Points (PDF) http://www.mathworks.com/company/newsletters/news_notes/pdf/Fall96Cleve.pdf
For more rigorous and detailed information on floating point arithmetic, read the following paper: What Every Computer Scientist Should Know About Floating Point Arithmetic.
Another resource is Technical Note 1108 http://www.mathworks.com/support/tech-notes/1100/1108.html on the Support section of The MathWorks website.
This answer is copied and slightly modified from matlab.wikia.com/wiki/FAQ:Why_is_0.3-0.2-0.1_not_equal_to_zero

6 Comments

Jan,
then what is the way of getting 0.3-0.2-0.1 = 0 in MATLAB? Or do we have to accept -2.7756e-17?
What is the solution?
Thank you.
You can get closer using the symbolic toolbox:
n1 = sym(0.3)
n1 =
3/10
n2 = sym(0.2)
n2 =
1/5
n3 = sym(0.1)
n3 =
1/10
n1 - n2 - n3
ans = 
0
Why do I say closer? Because sym() applied to a numeric value tries to find a sqrt(), a multiple of pi, or a rational ratio that closely matches the input value, and the approximations found are not certain to match up.
sym(pi+1e-14)-sym(pi-1e-14)
ans = 
0
You would expect x+1e-14 - (x-1e-14) to be +2e-14, but you see here that the result is 0, because both numeric values were approximated as pi.
sym(1.4142135623731) - sym(14142135623731)/sym(10)^13
ans = 
vpa(ans)
ans = 
You might expect that 1.4142135623731 - 14142135623731E-13 should be exactly zero, since they look like they represent the same value down to the last decimal place, but the floating point version gets approximated as sqrt(2).
vpa(sym(pi/123456)*123456 - sym(pi))
ans = 
Knowing that the Symbolic Toolbox "often" converts fractions of pi into symbolic pi times a fraction, you might expect that the above comes out as exactly 0. But sym()'s ability to detect fractions of pi is not endless
vpa(sym(pi/12345)*12345 - sym(pi))
ans = 
0.0
but it is moderately useful.
So "There Ain't No Such Thing As A Free Lunch". No matter what representation you choose, there will be border cases that you would think "ought" to work, but do not. The Symbolic Toolbox often does what you would want MATLAB to do if indefinite precision were possible, but it has limitations.
@Rajani Metri: This is not a limitation of Matlab, but of the IEEE754 standard for storing numbers as floating point values in binary format. Any numerical software that uses the double format shows the same effect, and this standard is implemented in all modern CPUs.
Of course there are other representations, e.g. the Symbolic Toolbox in Matlab (see Walter's useful and exhaustive explanation). But you have to pay a price: the calculations can be much slower; a factor of 1000 is possible.
You can also find some efficient quadruple-precision libraries. Besides the fact that many of them are expensive, you need to test them thoroughly with your specific application. Many functions are optimized for double precision, e.g. stepsize controllers in gradient based optimization tools. These will not necessarily run reliably when the underlying data type changes.
Therefore the question "what is the way of getting 0.3-0.2-0.1 = 0 in MATLAB?" is equivalent to: "how do you drive a nail into the wall using a drilling machine?" This is not the job Matlab is designed for. It is a fundamental part of numerical mathematics to cope with this limitation, and generations of scientists have searched for efficient ways to do so.
Software that expects 0.3-0.2-0.1 to return 0 has a design error. This is not a problem of the tool Matlab.
And of course, even if you use quad precision, you still have the same problem, just at a much smaller level. That quad precision tool still represents numbers in a binary form, when they have no finite binary representation. And that means you are still only approximating values like 0.3.
With a 128 bit representation that did not change the range, the error would be about 6e-36, but still not 0.
Updated the link to the Goldberg paper.


More Answers (3)

Let me add my take on the problem.
Suppose we try to represent these numbers in binary form. That is, represent 1/10 = 0.1 in decimal as a binary number. We must do that because all floating point numbers are stored in binary form. Even if decimal storage were used, we would still have problems. For example, does 2/3 - 1/3 == 1/3? Surely that must be true in decimal arithmetic?
Suppose we were working with 10 digits of precision in a decimal arithmetic storage form. What would 1/3 look like?
X = 0.3333333333
Y = 0.6666666667
I've rounded both values to the closest approximations I can find in a decimal form with only 10 digits after the point. Now Y-X will be:
Y - X = 0.6666666667 - 0.3333333333 = 0.3333333334
And that is not the same value as X. But, you say, I should have used Y = 0.6666666666 instead, rounding down. Then we would have Y-X = X.
But then we must also have X+Y = 3/3 = 1. And if we had rounded Y down to make the last result work, then we would see:
X + Y = 0.3333333333 + 0.6666666666 = 0.9999999999
So there will always be some contradiction, as long as we are forced to use a finite decimal storage for numbers that have no finite representation in that base.
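A quick MATLAB sketch of the same 10-digit experiment, rounding each fraction to 10 decimal places and then testing the identity:

```matlab
X = round(1/3, 10);   % 0.3333333333
Y = round(2/3, 10);   % 0.6666666667 (2/3 rounds up at the 10th digit)
Y - X == X            % logical 0: the difference is 0.3333333334
```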
The same applies to any binary storage form. This is how doubles and singles are stored in MATLAB. A double uses 52 binary bits for the fractional part of the mantissa (53 significant bits, counting the implicit leading bit). MATLAB comes as close as it can with those bits.
So what would the number 1/10 look like in binary? If we think of the binary bits like this:
0.00011001100110011001100110011001100110011001100110011...
That is...
1/10 = 2^-4 + 2^-5 + 2^-8 + 2^-9 + 2^-12 + 2^-13 + 2^-16 + 2^-17 + ...
TRY IT!
format long g
2^-4 + 2^-5 + 2^-8 + 2^-9 + 2^-12 + 2^-13 + 2^-16 + 2^-17
ans =
0.0999984741210938
I had to stop somewhere. If I add in a few more terms, we will come closer. In fact, the binary expansion that MATLAB uses for the number 1/10 is:
approx = sum(2.^[-4 -5 -8 -9 -12 -13 -16 -17 -20 -21 -24 -25 -28 -29 -32 -33 -36 -37 -40 -41 -44 -45 -48 -49 -52 -53 -55])
approx =
0.1
Which looks like 0.1 as displayed by MATLAB but is it? Is it EXACTLY 0.1?
sprintf('%0.55f',approx)
ans = '0.1000000000000000055511151231257827021181583404541015625'
sprintf('%0.55f',1/10)
ans = '0.1000000000000000055511151231257827021181583404541015625'
As you can see, both values are now seen to be the same. But neither is exactly 0.1, only the closest approximation MATLAB could find for that number.
Similarly, we could try to approximate 0.2 and 0.3 as binary numbers, but again, we will fail as long as we are forced to use a finite number of binary bits in the approximation. And as we saw with the decimal examples before, we will always fail some of the time. Sometimes, things work. For example, try these two examples:
0.2 - 0.1 == 0.1
ans = logical
1
0.3 - 0.2 == 0.1
ans = logical
0
So one of those trivial mathematical identities seems to work, but the other fails. Again, the problem is that MATLAB can use only a finite number of bits to represent any number. And when those numbers are not representable exactly in a finite number of bits, we will SOMETIMES see a contradiction to what we expect must be true.
This does not happen all of the time. For example, what is the representation of the number 1/8 == 0.125 in MATLAB?
sprintf('%0.55f',1/8)
ans = '0.1250000000000000000000000000000000000000000000000000000'
So MATLAB gets that EXACTLY correct. The trick is, 1/8 is just a power of 2 itself. So it is exactly representable in a binary form. And that means we will see this ALWAYS work in MATLAB:
1/2 - 1/8 == 3/8
ans = logical
1
Simple positive or negative powers of 2 (and integer multiples of them) will be correctly represented, as long as a finite number of bits are sufficient to do the job. But 0.1, 0.2, and 0.3? While they are finitely representable as decimals, that is not the case in binary. And THAT is my take on why 0.3-0.2-0.1 is not equal to zero in MATLAB.
Walter Roberson on 29 Jun 2022
Edited: Walter Roberson on 9 Aug 2023
Consider any finite-length positional notation with a fixed base, B. For example, base 10 and 0.193, meaning 1*10^-1 + 9*10^-2 + 3*10^-3 -- or to put it another way, (1*10^2 + 9*10^1 + 3*10^0)/10^3 = 193/10^3.
Consider a number between 0 and 1. Hypothesize that we can express it as a rational fraction: an integer N divided by M digits of the base, N/B^M. Now let the number be 1 divided by a number P that is relatively prime to B, N/B^M = 1/P, with N an integer from 0 to (B^M - 1). For example, 3 is relatively prime to 10, so N/10^M = 1/3.
Now, cross multiply the denominator to get N = B^M / P . But our hypothesis is that B and P are relatively prime, so we know that B^M cannot be divided exactly by P.
Therefore, for any finite length M for fixed integer base B, there exist numbers (rational numbers even!) that cannot be exactly represented in the base. In the previous example, 1/3 cannot be exactly represented in any fixed length number of decimal digits; neither can 1/7 or 1/11 or 1/13 ...
Now, let the base be 2, and the relative prime be 10. N = 2^M/10 cannot work out for any finite number of digits: there is no power of 2 that is exactly divisible by 10.
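You can check this directly in MATLAB; the last decimal digit of 2^M cycles through 2, 4, 8, 6 and never hits 0:

```matlab
% All of 2^1 .. 2^20 are exactly representable as doubles
any(mod(2.^(1:20), 10) == 0)   % logical 0: no power of 2 is divisible by 10
```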
And therefore, there is no possible finite base-2 positional representation of 1/10 (or 1/100 or 1/1000). And so as long as you are using finite binary representation, 0.001 (base 10) can never exactly equal 1/1000 . So when you multiply 0.001 represented in finite positional binary by 350, you are never going to get exactly 350/1000 .
The question then becomes whether the value that you do get for 350*0.001 is the same approximation as you get for writing 0.350 . And the answer for that happens to be NO. And if it were the same, that would be by chance, and there would be different numbers that failed to work out.
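That claim is easy to check at the prompt: the product and the literal round to two different doubles.

```matlab
350*0.001 == 0.350    % logical 0
format hex
[350*0.001; 0.350]    % the two hex representations differ in the last bit
```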
Given any particular rounding system, even given any fixed number of extra "guard" digits for multiplication, you can show that as long as you are using a finite positional integer base system, that there will be cases like this, where the rounded representations will not be equal after a multiplication.
I am emphasizing that this is not a MATLAB bug: this is an inherent problem for every finite positional integer-base number system.
You could reduce problems if you immediately switch everything to indefinite-precision rational numbers and carry out the calculations as rationals, but (A) this would require growing amounts of memory as you went through the calculations; and (B) it would not completely solve the problems anyhow. (For example, if the user wrote 0.3333333333 then were they "intending" to write the rational 1/3, or were they "intending" to write the rational 3333333333/10000000000 ?)
MATLAB chose finite binary representation because that is what your computer hardware uses.
Starting in release R2024b, you can use the isapprox function to determine if two values are approximately equal to within a tolerance.
x = 0.2 + 0.1;
y = 0.3;
isapprox(x, y)
ans = logical
1
isapprox allows you to specify tolerances using one of four predefined tolerance levels or specific absolute and/or relative tolerance levels numerically. If you don't specify a tolerance (as with the example above) a 'verytight' tolerance of 1e-15 is used.
A = 1;
B = A + 1e-10;
isapprox(A, B, 'tight') % Tolerance of 1e-12
ans = logical
0
isapprox(A, B, AbsoluteTolerance = 1e-11)
ans = logical
0
isapprox(A, B, AbsoluteTolerance = 1e-9)
ans = logical
1
isapprox(A, B, 'loose') % Tolerance of 1e-8
ans = logical
1

1 Comment

It's convenient that it will adjust the named tolerance presets based on the class.

