vpa(4503599627370491.5)

vpa(4503599627370491.5) produces 4503599627370496.0 in R2017b. Why?
Further, sym(4503599627370491.5) - sym(4503599627370496) produces 0.

Answers (5)

Walter Roberson
Walter Roberson on 12 Feb 2018

3 votes

sym and vpa are normal functions. Their arguments are evaluated with the normal MATLAB rules before the function is called. Double precision loses information on a number that large before it gets passed on.
You should enclose large numbers in quotes for use with sym or vpa.
Note: the use of sym on character vectors representing numeric constants will continue to be supported, as will sym on a character string representing a simple variable name. The functionality being discontinued within a small number of releases is using sym() on a character string representing an expression: sym() is defined as treating such character strings as MuPAD expressions, whereas the replacement function str2sym will instead treat them as MATLAB syntax.
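The precision-loss argument holds for any IEEE 754 double, not just in MATLAB. Here is a minimal sketch in Python (which uses the same 64-bit doubles); the threshold values below are standard IEEE 754 facts, not taken from this thread:

```python
import math

# IEEE 754 doubles carry a 53-bit significand, so the gap between
# adjacent representable values grows with magnitude.
print(math.ulp(2.0 ** 51))  # 0.5 -> halves near 2.25e15 are still representable
print(math.ulp(2.0 ** 52))  # 1.0 -> halves are lost at and above 2^52
print(math.ulp(2.0 ** 53))  # 2.0 -> odd integers are lost at and above 2^53

# Above 2^53 the literal is rounded before any function ever sees it,
# which is why quoting the number as a string is the only safe route:
x = 9007199254740993.5           # 2^53 + 1.5 as a source literal
print(x == 9007199254740994.0)   # True: already rounded at parse time
```

The same parse-time rounding happens to a MATLAB numeric literal before sym or vpa is ever called, which is exactly why the quoted-string form preserves the intended value.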
Jon Dattorro
Jon Dattorro on 7 Sep 2023

1 vote

Advanpix Multiprecision Computing Toolbox (MCT) is far superior to Matlab Variable Precision Arithmetic (VPA), which produces incorrect answers and erroneous results in many different circumstances. This holds true in my own experience and numerical experiments. There is some discussion of that in their Forum here:
I wish MCT were integrated into Matlab, because doing so would significantly improve its speed of execution. It is certainly faster than VPA, which requires the Symbolic Math Toolbox.
Mathworks needs to set aside its internal NIH (not-invented-here) pride and embrace the groundbreaking achievement of Pavel Holoborodko, who first introduced MCT in 2011.
Jon Dattorro

11 Comments

The MCT user manual contains fundamental misunderstandings of MATLAB, so I would not trust anything it says about MATLAB without further investigation.
This answer doesn't really address the Question. Nevertheless, here is an example from the link above:
format longg
A = rand(3);
A
A = 3×3
    0.00675007445819154     0.738285744821583     0.682382286857692
    0.289482492634215       0.614525723020258     0.996567484817147
    0.941224890008588       0.636209659446725     0.439525954043589
digits(25)
diff(A)
ans = 2×3
    0.282732418176024      -0.123760021801325      0.314185197959455
    0.651742397374373       0.0216839364264665    -0.557041530773558
vpa(A)
ans = 
diff(vpa(A))
ans = 
Hi Walter,
Could you provide an example of a misunderstanding of Matlab? I'm sure I could persuade Pavel to revise the manual.
I've been applying Advanpix MCT successfully since February 2018, because VPA wasn't doing what I needed back then. In my opinion, the principal reason Advanpix MCT works so well is that Pavel has a deep understanding of Matlab. Indeed, whenever Mathworks issues a new release, significant adjustments from Advanpix can sometimes be needed to ensure compatibility.
I urge you to investigate MCT further. I think that after working with it for a while, you might become a proponent. It's also lots of fun, if you like to observe quantization artifacts brought about by floating-point arithmetic.
Jon Dattorro
Which continues on to claim that the MATLAB answer is wrong for showing exact zeros. However, vpa(A) returns sym() objects, and the diff() method of sym() objects computes the derivative -- so MATLAB is doing exactly what is documented.
The part of the page claiming that diff(vpa(A)) should return extended-precision numeric differences displays a fundamental misunderstanding of how object-oriented programming works.
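As a toy analogy (in Python, with invented names), the dispatch rule at issue here - the class of the argument, not the caller's expectation, decides which diff runs - looks like this:

```python
# Hypothetical sketch: a free function named diff with numeric semantics,
# and a class whose diff method means "derivative", as in the Symbolic Toolbox.
class Sym:
    """Toy stand-in for a symbolic object."""
    def __init__(self, expr):
        self.expr = expr

    def diff(self):
        # "diff" on a symbolic object means differentiation...
        return Sym(f"d/dx({self.expr})")

def diff(values):
    # ...while plain diff means successive numeric differences.
    return [b - a for a, b in zip(values, values[1:])]

print(diff([1, 4, 9, 16]))      # [3, 5, 7]  -- numeric semantics
print(Sym("x**2").diff().expr)  # d/dx(x**2) -- symbolic semantics
```

In MATLAB, diff(symObj) automatically dispatches to the sym method; Python requires the explicit symObj.diff() call, but the principle - behavior follows the operand's class - is the same.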
I wrote the company a message of complaint about that section.
Walter,
If you still have your "message of complaint", please post it in their Forum here:
Someone from Advanpix will clear this up.
Jon Dattorro
Dear Walter,
Thank you for your email/complaint. We already replied to your message in private.
But, as the discussion became public, I think it is Ok to publish our reply here.
I cannot publish your email without your permission, so I include only my reply (text between """..."""):
"""
Thank you very much for your comment.
Please be sure, we have a solid understanding of OOP, especially in relation to the MATLAB language. Our toolbox uses OOP extensively, including function overloading, etc. etc.
You are absolutely right about the different meaning of "diff" regarding symbolic expressions. I agree 100%.
However, I cannot agree with you on "expectation" and on how "vpa/diff" should work on numerical inputs.
(1) The original "diff" was one of the first functions introduced in MATLAB with very clear semantics - numeric approximation of the derivatives.
(2) The "VPA" is literally claimed to be "Variable-precision arithmetic (arbitrary-precision arithmetic)" in MATLAB's documentation.
(3) Arbitrary precision is particularly useful for estimating derivatives numerically (as this is a very ill-conditioned problem).
(4) Most vpa-enabled functions check the input type and apply numeric algorithms to numerical inputs. Symbolic inputs are treated differently, with symbolic-aware algorithms.
Combining (1)-(4), it is natural to expect that the overloaded "vpa/diff" would follow the original meaning and produce a numerical derivative for numerical inputs using extended precision.
The 10+ years of interaction with toolbox users only confirms this.
Even prominent researchers - contributing to MATLAB code and close to the team - express this expectation.
In my opinion, to fix this ambiguous situation, the MATLAB team needs to introduce another function for symbolic derivatives (e.g. symdiff).
The "diff" needs to be reserved for numerics, as this semantic has been engraved into users' perception for decades.
***
We will add changes to our documentation regarding the "diff" function (already done).
Thank you,
Pavel.
"""
I can only add that user-accessible documentation for the "vpa/diff" method is non-existent. Neither "help vpa.diff" nor "doc vpa.diff" returns meaningful results. The only option for the user is to find the actual overloaded code and read the comments. Compare this with the corresponding function from our toolbox:
>> help mp.diff
diff Difference and approximate derivative.
Usage is identical to built-in function diff.
See also gradient, sum, prod.
To sum up, the semantics of the "vpa/diff" function are poorly defined for numeric inputs. This leads to confusion. It has nothing to do with how OOP aspects are implemented in MATLAB (we think that is far from perfect, and we even published a post about it many years ago).
To clear up all the confusion, the behavior of "vpa/diff" on numeric inputs must be clearly outlined in the documentation.
Walter,
Although there are strong arguments on both sides, we might agree that Matlab diff() was overloaded when ported to the Symbolic Toolbox; that its expected operation was subject to misinterpretation by experts should serve as evidence of that. Pavel has consequently removed references to diff() from the Advanpix User Manual. I think he should be applauded for that gracious gesture.
Moving forward, kindly cite any outstanding "fundamental misunderstandings of MATLAB" in the Advanpix User Manual. My interest is in discovering which obstacles might prevent future integration of the Multiprecision Computing Toolbox into Matlab.
Jon Dattorro
Please indicate anywhere in any Mathworks documentation stating that a separate class for symbolic numbers, distinct from sym, is created. In particular, given that Pavel expects help vpa/diff to work, please cite anywhere in the documentation indicating that a class or package named vpa is created.
A = vpa(4503599627370491.5)
A = 
4503599627370496.0
class(A)
ans = 'sym'
methods(A)
Methods for class sym:
(several hundred methods - including diff, eig, svd, vpa, vpaintegral and vpasolve - elided here for readability)
Static methods: cast, eye, loadobj, ones, empty, inf, nan, zeros
Call "methods('handle')" for methods of sym inherited from handle.
This shows that vpa() of a numeric value returns a sym, and that it is sym that has the method diff.
But perhaps it is different for symbolic formulae?
syms x
f = sin(x^2) + gamma(x+1) + 4503599627370491.5
f = 
B = vpa(f)
B = 
class(B)
ans = 'sym'
C = diff(B)
C = 
class(C)
ans = 'sym'
D = vpa(C)
D = 
class(D)
ans = 'sym'
Nope, it is sym all the way down, at least in these examples. Is there any other possibility resulting from vpa()? Only one: in some cases vpa() can return a symfun instead of a sym.
Expecting vpa() to return anything other than sym (or sometimes symfun) is a substantial misunderstanding of the Symbolic Toolbox.
I'm a long-time Matlab user and a relatively recent user of the Symbolic Math Toolbox (SMT). Since I started using the SMT, I've had it engraved in my head that SMT diff performs symbolic differentiation whereas base Matlab diff computes numeric differences. I've never confused those two operations. Having said that, I do recall that when I started using the SMT I thought it odd that TMW chose to reuse diff for the SMT instead of something that sounds more intuitive, like 'derivative' or 'deriv'.
What is the function "vpa/diff" that you're trying to find? As I understand it, VPA variables are still of class sym and result in a call to the diff method of the sym class.
x = vpa(pi)
x = 
3.1415926535897932384626433832795
class(x)
ans = 'sym'
which diff(x)
/MATLAB/toolbox/symbolic/symbolic/@sym/diff.m % sym method
When will vpa return a symfun? Something like this?
syms x
f(x) = x/sym(pi)
f(x) = 
g(x) = vpa(f(x))
g(x) = 
Are there any other cases where vpa returns a symfun?
vpa() will return a symfun if the parameter to vpa() evaluates to a symfun.
syms x
f(x) = x/sym(pi)
f(x) = 
g1 = vpa(f)
g1(x) = 
g2 = vpa(3*x + tan(x)^2 + f)
g2(x) = 
g3 = vpa(f^3)
g3(x) = 
g4 = vpa( symfun(gamma(x)^2, x) )
g4(x) = 
g5 = vpa(f(x)) %f(x) is not a symfun
g5 = 
whos
Name      Size    Bytes  Class   Attributes
cmdout    1x33    66     char
f         1x1     8      symfun
g1        1x1     8      symfun
g2        1x1     8      symfun
g3        1x1     8      symfun
g4        1x1     8      symfun
g5        1x1     8      sym
x         1x1     8      sym
vpa() will not return a symfun in any case where the expression does not evaluate to a symfun. In the example of f(x): invoking a symfun with a definite parameter evaluates the function at the given parameter, returning a symbolic expression.
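A loose Python analogy for this last point (toy code, invented names): a function object stays callable, but applying it to a definite argument yields a plain value - just as vpa(f) yields a symfun while vpa(f(x)) yields a sym.

```python
# Toy stand-ins: f plays the role of a symfun, and f(2.0) the role of the
# sym expression produced by evaluating it at a definite argument.
def f(x):
    return x / 3.141592653589793

g = f          # still a function-valued object, like vpa(f) staying a symfun
y = f(2.0)     # evaluated at a definite argument: now just a value

print(callable(g))   # True
print(callable(y))   # False
```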


Pavel Holoborodko
Pavel Holoborodko on 25 Sep 2023
Edited: Pavel Holoborodko on 25 Sep 2023
@"Expecting vpa() to return anything other than sym (or sometimes, symfun) is a substantial misunderstanding of the Symbolic Toolbox...."
I see - "vpa" is actually a factory function, generating instances of the "sym" class. I stand corrected: "sym.diff" is the full name of the function.
However, these technical details of the SMT implementation have nothing to do with the point we are discussing. We are discussing the semantics of the functions in SMT - in other words, what the SMT functions do (or should do) with their input arguments. The type they return is just a technicality in this context.
The SMT does much more than pure symbolic operations. Many functions in SMT apply numerical algorithms (using extended precision) to compute the output results. For example, take a look at EIG:
% LAMBDA = EIG(VPA(A)) and [V,D] = EIG(VPA(A)) compute numeric eigenvalues
% and eigenvectors using variable precision arithmetic. If A does not
% have a full set of eigenvectors, the columns of V will not be linearly
% independent.
The same goes for SVD, and for other cases where the result cannot be computed analytically or where arbitrary precision plays an important role.
(Of course the functions still return a result of type "sym", but the result itself was computed numerically. I hope this distinction is clear.)
This is exactly the case with "diff". Extended precision is important for computing numerical derivatives/differences accurately, since this is a highly ill-conditioned problem (much like eigenvalues). Therefore it is quite natural to expect "sym.diff" to have this functionality.
Besides, the original meaning of "diff" has stayed the same since MATLAB's inception: computing differences and numeric derivatives. It is not clear why the SMT developers decided to change its semantics in SMT. They could have just introduced a new function for symbolic derivatives, as @Paul said, and the ambiguity would have been avoided.
Here is a simple example for illustration:
>> A = vpa(rand(3)); % Store numeric matrix as 'sym'
>> ev = eig(A) % Computes e-vals NUMERICALLY using extended precision (see sym.eig)
ev =
1.752670342542888271728211766393
0.8398632587280311835223044391663
-0.18794383321803341008176836388774
>> class(ev) % Result stored as 'sym', but it is inherently numeric
ans =
'sym'
Similarly, I expect "sym.diff" to provide functionality analogous to the built-in "diff" and to work like "sym.eig" above - computing the result numerically using extended precision. But instead we see:
>> d = diff(A)
d =
[0, 0, 0]
[0, 0, 0]
[0, 0, 0]
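The ill-conditioning point above is easy to reproduce outside MATLAB. A hedged sketch using Python's standard decimal module (stdlib only - this is not MCT or VPA code): with a tiny step, double precision loses most digits of a forward difference to cancellation, while 50-digit arithmetic does not.

```python
import math
from decimal import Decimal, getcontext

# Forward-difference estimate of exp'(0) = 1 with a very small step.
h = 1e-12
fd_double = (math.exp(h) - 1.0) / h   # cancellation typically destroys most digits

getcontext().prec = 50                # 50 significant digits
hd = Decimal("1e-12")
fd_extended = (hd.exp() - 1) / hd     # same formula, extended precision

print(fd_double)                      # polluted by round-off amplified by 1/h
print(fd_extended)                    # 1.0000000000005000... (only truncation error)
```

The exact double-precision error depends on how the literal and exp() round, but the extended-precision result is accurate to its truncation error h/2 = 5e-13, which is the behavior Pavel argues users expect from an arbitrary-precision "diff".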

11 Comments

James Tursa
James Tursa on 25 Sep 2023
Edited: James Tursa on 25 Sep 2023
I don't want this thread to develop into some type of flame war, since I think there are valid points to be made on both sides of the discussion. MATLAB has documented what their sym diff method does, so it is not an error for it to behave as it does. That being said, I am in agreement with @Paul that this was a poor choice on the part of MATLAB and a point of confusion for some that didn't need to be there. Picking "deriv" or something else would have been a better choice. I love working with MATLAB, but I put this one in the bucket of things I wish they had done differently (like making the half class an opaque class instead of a standard numeric class, continuing to hamstring the C-mex interface particularly for classdef objects, etc.).
Historic note:
MATLAB's symbolic diff inherited its name from Maple, from back in the days when the Symbolic Toolbox was mex functions that called into Maplesoft's maple (though I suspect it might have been from the days of "Waterloo Maple", before they renamed to "Maplesoft").
Still, back then Mathworks could have used a different function name and rewritten it to diff() on the way to and back from the Maple symbolic engine. It might have been a bit tricky to get right in the days when symbolic expressions were mostly quoted strings.
Apparently the development of extended-precision computations in SMT is stuck in those historical times as well:
>> rng(0);
>> digits(25); % Equivalent to 34-digits, as SMT uses 9 guard digits
>> A = vpa(randn(500));
>> tic; [V,D] = eig(A); toc;
Elapsed time is 7016.707970 seconds.
>> tic; [U,S,V] = svd(A); toc;
Elapsed time is 3570.744016 seconds.
These timings are the worst among Maple, Mathematica and MATLAB.
For comparison, here are the timings of the (third-party) toolbox mentioned above (MCT):
>> rng(0);
>> mp.Digits(34);
>> A = randn(500,'mp');
>> tic; [V,D] = eig(A); toc;
Elapsed time is 9.198740 seconds.
>> tic; [U,S,V] = svd(A); toc;
Elapsed time is 2.548234 seconds.
* I am the author and main developer of the MCT. These timings can be easily verified since we provide trial versions for tests.
** We have no intention of using this forum for advertisement or anything alike. The toolbox was mentioned on this thread by another person, and it seems legitimate and relevant to the discussion to provide more information. Hope this is not against forum rules.
Walter,
While these MCT timings are impressive by comparison, there are several important and wide-impact instances where Advanpix MCT causes a reduction in execution speed not exclusively due to the multiprecision arithmetic itself. I documented one such instance here,
where the mere existence of MCT on Matlab's path causes a 2X slowdown of purely double-precision Matlab programs (having no invocation of MCT functions).
There are more instances, like this one, that warrant embedding MCT directly into Matlab, as VPA is embedded. I am endeavoring to discover obstacles at Mathworks that would prevent that. Do you have any more objections, as yet unstated? Do you know of any other Matlab personnel or MVPs who would voice their concerns here?
Jon Dattorro
Let me see if I have this straight:
MCT uses a library that slows down MATLAB's double precision arithmetic.
... Therefore MCT should be embedded directly into MATLAB ???
I defer to Pavel, who can describe the exact cause.
But yes, it is one reason for tighter integration into Matlab.
My assumption is: this particular slowdown would be alleviated by embedding MCT more closely into Matlab.
Please, kindly, provide additional objections to integration and invite others to this discussion.
Jon Dattorro
Of course, MCT doesn't reduce the speed of double precision computations in MATLAB!
The toolbox overloads standard MATLAB routines for creating elementary arrays - zeros, ones, eye, etc. This is needed to support the 'mp' type as an argument in such functions, e.g. eye(100,'mp'). If the requested type is not 'mp', the toolbox dispatches the call to the built-in MATLAB functions:
if strcmpi(classname,'mp')
% Create multiprecision array of 'mp' objects
% ...
else
% Call built-in function if standard type is requested
[varargout{1:nargout}] = builtin(funcname,varargin{:}); % <-- 'builtin' is slow
end
As it turns out, MATLAB's 'builtin' function is really slow. A wrapped call through 'builtin' is ~1.5-2 times slower than a direct call:
>> tic; for k=1:100000, zeros(100); end; toc; % Direct call
Elapsed time is 0.307563 seconds.
>> tic; for k=1:100000, builtin('zeros',100); end; toc; % Wrapped call
Elapsed time is 0.480524 seconds.
This is where the slowdown comes from. It affects only the creation of elementary arrays; the speed of computations is not affected in any way.
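The cost of name-based dispatch is not MATLAB-specific. A toy Python analogy (invented function names, not MCT code): routing a call through a string lookup, as the 'builtin' wrapper does, adds per-call overhead while leaving results unchanged.

```python
import timeit

def make_zeros(n):
    # Stand-in for a cheap array-creation builtin.
    return [0.0] * n

def dispatched(funcname, *args):
    # Stand-in for builtin('zeros', ...): resolve the function by name
    # at call time, then forward the arguments.
    return globals()[funcname](*args)

# Same result either way; only the call path differs.
assert dispatched("make_zeros", 4) == make_zeros(4)

direct = timeit.timeit(lambda: make_zeros(100), number=200_000)
wrapped = timeit.timeit(lambda: dispatched("make_zeros", 100), number=200_000)
print(f"direct:  {direct:.3f}s")
print(f"wrapped: {wrapped:.3f}s")   # extra lookup + argument forwarding per call
```

The ratio varies by machine and runtime, but the wrapped path always pays for the lookup and forwarding, mirroring the ~1.5-2x figure Pavel reports for 'builtin'.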
Overall, the integration mechanisms provided by MATLAB for third-party toolboxes are quite inefficient and slow. We all know the infamous overhead and limitations of the MEX API, especially for custom types (classdef). The slowness of 'builtin' (example above) means that efficient third-party overloads for elementary arrays are (almost) impossible. The list is long.
What @Jon Dattorro is trying to say is that in case of deep integration with MATLAB core, all these inefficiencies can be avoided (for MCT or any other third-party toolbox).
@"Therefore MCT should be embedded directly into MATLAB ???"
Over the last 10+ years MCT has become a de facto standard in the field of extended precision computations in the MATLAB universe, thanks to its quality, speed, breadth and depth of functionality. This could be one of the reasons to consider integrating it into MATLAB's standard set of toolboxes (and deep integration with MATLAB's core).
The core MATLAB functionality is based on the work of third-party developers, from Intel MKL, FFTW, UMFPACK/CHOLMOD and MuPAD/Maple to the ongoing contributions of researchers from all over the world (e.g. the matrix functions contributed by Nick Higham's team). Independent researchers/developers are good for the ecosystem, as they are probably much more knowledgeable in the fields they work in.
Toolbox overloads standard MATLAB routines for creating elementary arrays - zeros, ones, eye, etc. This is needed to support the 'mp' type as an argument in such functions, e.g. eye(100,'mp').
If by "overloads" you mean your toolbox defines functions named zeros, ones, eye, etc. that shadow the built-in functions in MATLAB, you should not do that. Follow the instructions given on this documentation page instead to create methods of your class that allow the built-in functions to create instances of your class without affecting the behavior of those built-in functions on other data types.
Pavel Holoborodko
Pavel Holoborodko on 29 Sep 2023
Edited: Pavel Holoborodko on 29 Sep 2023
This is a great suggestion, thank you. In what version of MATLAB was this introduced? We try to maintain compatibility down to ~R2014-R2016 (our current code with shadowing was written in 2012 and is probably compatible down to R2012x).
Is such proper overloading supported for other functions which accept a 'classname' argument (e.g. test matrices - pascal, hadamard, etc.)?
@...without affecting the behavior of those built-in functions on other data types.
We simply call the built-in functions for all other data types (except ours). Thus no behavior is affected, besides one extra call to 'builtin'.
I do not know when it was introduced. I find it documented at least as far back as R2018b, https://www.mathworks.com/help/releases/R2018b/matlab/matlab_oop/class-support-for-array-creation-functions.html (which is as far as the archives go at the moment)
This covers only the simplest case, when the user passes the typename argument explicitly (e.g. zeros(...,'mp')).
However, in most cases, existing/legacy code relies on the default form, without the typename argument.
That is why our users requested a special mode in which the array-creation functions generate extended-precision arrays by default (e.g. when no typename is specified, or when it refers to the standard floating-point types 'double' or 'single').
This is useful when legacy code of considerable volume needs to be converted to extended precision:
>> mp.OverrideDoubleBasicArrays(true); % Basic arrays are created as 'mp' by default
>> A = zeros(3);
>> whos A
Name Size Bytes Class Attributes
A 3x3 272 mp
>> % Run legacy code with zeros(...), eye(...), etc.
>> % ...
>> mp.OverrideDoubleBasicArrays(false); % Return to default behavior
That is why our toolbox has to supersede the default functions and call them only when applicable (using 'builtin' calls). This approach works well, except for the slow 'builtin' calls.
***
Another interesting case is when the functions are called without arguments at all, e.g. zeros, eye, etc. This case is treated differently by MATLAB - it just returns 0 or 1, without even calling the global/built-in functions. No OOP is involved; this is just hardcoded in MATLAB, meaning these cases are impossible to overload for custom class types at all, even by superseding the built-in functions.


Jon Dattorro
Jon Dattorro on 3 Oct 2023

0 votes

Respectfully:
Are there any further objections to tight embedding of Advanpix Multiprecision Computing Toolbox into core Matlab functionality?
Pavel has made one concession (in documentation) and is willing to make another (in coding, per Mr Lord).
Jon Dattorro
Jon Dattorro
Jon Dattorro on 10 Jan 2025

0 votes

Walter,
I am willing to fund (pay for) a newly issued license from Advanpix for their Multiprecision Computing Toolbox for Matlab, for someone who is qualified and willing to perform a detailed comparison against Matlab VPA and then present their results incrementally here.
I am not affiliated with Advanpix, btw. I am a dedicated independent user.
Jon Dattorro

2 Comments

I have other things on my plate at the moment; I cannot take this on.
Please keep this offer in mind, Walter.
Perhaps you might recommend someone.
Jon


Asked: on 12 Feb 2018
Commented: on 10 Jan 2025
