There are multiple ways to solve this problem. Sorry, but quad precision is not an option: there is no way to achieve quad precision in MATLAB, at least without recourse to some other tool, or exporting the results to a language where quad precision is available.
Brute force, without recourse to the symbolic toolbox: use my HPF toolbox, which is free to download. I'll use 100 decimal digits of precision, even though that is surely overkill here.
s = hpf('-6.471606349773182e-23');
d = hpf('-6.604939683106516e-23');
See that HPF has no problem in evaluating expressions like those that you have.
So A is just
A = hpf('1.449056603773585e+19')*(log(1+exp(s))-log(1+exp(d)))
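(If you don't have MATLAB handy, the same high-precision evaluation can be sketched with Python's standard-library decimal module. This is just an illustration of the idea, not HPF itself.)

```python
from decimal import Decimal, getcontext

# Work in 100 significant digits, mirroring the HPF computation above.
getcontext().prec = 100

s = Decimal('-6.471606349773182e-23')
d = Decimal('-6.604939683106516e-23')
c = Decimal('1.449056603773585e+19')

# log(1+exp(s)) - log(1+exp(d)), evaluated entirely in high precision.
# The cancellation costs about 23 digits, leaving plenty of accurate ones.
A = c * ((1 + s.exp()).ln() - (1 + d.exp()).ln())
print(A)
```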
As a check, the symbolic toolbox yields:
s = sym('-6.471606349773182e-23');
d = sym('-6.604939683106516e-23');
A = sym('1.449056603773585e+19')*(log(1+exp(s))-log(1+exp(d)))
In fact, the symbolic toolbox loses some precision here, because it used limited precision to do some of those computations. And, since you only provided those numbers to about 16 significant digits, what can you expect? But it does reasonably well. The exact result, reported to 32 significant digits, is:
Can we get that result by using only numerical methods in double precision? Brute force will of course fail, as you found.
A = 1.449056603773585e+19*(log(1+exp(s))-log(1+exp(d)))
So in order to get a viable result for A using double precision, one needs to resort to numerical analysis. We need to understand that even if we could compute this for very tiny numbers s and d, the results log(1+exp(s)) and log(1+exp(d)) are very close to each other. Using HPF, the exact results out to 40 decimal digits are:
So, when we subtract the two results, we will see massive subtractive cancellation. HPF is able to give the exact results here because I did the computations in sufficiently high precision, but then only reported the first 40 digits.
Going back to double precision, even tools in MATLAB like log1p must fail, because exp(s) returns exactly 1 in a double. As far as MATLAB is concerned, with doubles for s and d, these two results are identical:
log(1+exp(s)) == log(1+exp(d))
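(The same failure is easy to reproduce in any double-precision environment; a quick Python sketch of the identical brute-force attempt:)

```python
import math

s = -6.471606349773182e-23
d = -6.604939683106516e-23

# |s| and |d| are far below eps (~2.22e-16), so exp() rounds to exactly 1.0,
# and both logs collapse to exactly log(2).
print(math.exp(s) == 1.0)
print(math.log(1 + math.exp(s)) == math.log(1 + math.exp(d)))

# Brute force therefore returns exactly zero.
A = 1.449056603773585e+19 * (math.log(1 + math.exp(s)) - math.log(1 + math.exp(d)))
print(A)
```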
Ok, so can we do something? Yes, with some mental effort. Note that if s is TINY, then exp(s) is essentially 1. And then we lose accuracy because of the massive subtractive cancellation, because both logs are returning essentially log(2).
What if we computed log(1+exp(x)) - log(2) instead? Of course, we need to do this computation accurately for very tiny x. First, see that
log(1+exp(x)) - log(2) = log((1+exp(x))/2)
When we do the subtraction, those -log(2) terms will exactly cancel each other out. But they enable us to do the computation in double precision. We will need to use a Taylor series. I'll use the symbolic toolbox here, because I'm too lazy to do the algebra on paper.
-x^4/192 + x^2/8 + x/2
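(If you distrust my algebra, or the symbolic toolbox, the truncated series is easy to check numerically against the direct expression. A Python sketch; note the agreement is limited by roundoff in the direct computation, roughly 1e-16 in absolute terms, which is exactly the cancellation problem we are working around.)

```python
import math

def direct(x):
    # log((1+exp(x))/2), computed the obvious way (fine for moderate x)
    return math.log((1 + math.exp(x)) / 2)

def series(x):
    # Taylor expansion about x = 0: x/2 + x^2/8 - x^4/192 + ...
    return x/2 + x**2/8 - x**4/192

for x in (1e-2, 1e-4, 1e-6):
    print(x, direct(x), series(x), abs(direct(x) - series(x)))
```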
For tiny values of x of magnitude smaller than eps, we can truncate that series to one term. But we can see what happens if we use 2 terms anyway.
logapprox1 = @(x) x/2;
logapprox2 = @(x) x/2 + x.^2/8;
s = -6.471606349773182e-23;
d = -6.604939683106516e-23;
format long g
A = 1.449056603773585e+19*(logapprox1(s)-logapprox1(d))
A = 1.449056603773585e+19*(logapprox2(s)-logapprox2(d))
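(Putting it all together outside MATLAB, here is a Python sketch of the same idea: the truncated series evaluated in plain double precision, compared against a high-precision reference built with the stdlib decimal module, which plays the role HPF plays above.)

```python
from decimal import Decimal, getcontext

# Double-precision computation using the truncated series log((1+exp(x))/2) ~ x/2 + x^2/8
s = -6.471606349773182e-23
d = -6.604939683106516e-23
c = 1.449056603773585e+19
A_double = c * ((s/2 + s**2/8) - (d/2 + d**2/8))

# High-precision reference of the original expression
getcontext().prec = 100
S = Decimal('-6.471606349773182e-23')
D = Decimal('-6.604939683106516e-23')
C = Decimal('1.449056603773585e+19')
A_ref = C * ((1 + S.exp()).ln() - (1 + D.exp()).ln())

print(A_double)
print(A_ref)
print(abs(A_double / float(A_ref) - 1))   # relative error, near machine epsilon
```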
As you can see, the two results are both reasonably accurate. Remember, the exact result (using scientific notation) was:
So our double precision computation was correct out to almost the full precision available using doubles.
The trick was to use finesse. Brute force only works if you are willing and able to do the computation using the proper tools that will support brute force. As I showed, even the symbolic toolbox missed a few digits out there at the end.