curve fitting exponential function with two terms

Update: I need help fitting this set of points nicely with a two-term exponential function.
% Curve Fit
x = [6500 6350 6000 5400 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(x' , y', 'exp2')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = 0.5
  b = 0
  c = 0
  d = 0

10 Comments

This does not look exponential to me. Or did you mean x is a function of y? (That doesn't look exponential, either.)
Also, fitting two terms of an exponential, with only 5 data points, seems very ill-advised.
Can you tell us exactly what functional form you want to use?
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
scatter(x,y)
I used the following but it does not look appropriate. I am not sure how to modify it.
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(x' , y', 'exp2')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = -2.905e-15  (-8.856e-13, 8.798e-13)
  b = 0.004984  (-0.04157, 0.05153)
  c = 9.409  (-64.2, 83.02)
  d = -0.0005043  (-0.002143, 0.001135)
plot(theFit , x , y)
Looks appropriate to me...
It's not plotting in the following code, so I assumed it's not appropriate. See below.
clear;
clc;
close all;
% Curve Fit
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(x' , y', 'exp2')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = -2.905e-15  (-8.856e-13, 8.798e-13)
  b = 0.004984  (-0.04157, 0.05153)
  c = 9.409  (-64.2, 83.02)
  d = -0.0005043  (-0.002143, 0.001135)
plot(theFit , x , y)
% Monthly Expense Cost
exp = [6500 6350 5800 4900 4500];
expUtility = [0 0.25 0.5 0.75 1.0];
% Utility Points
figure;
h = plot(exp,expUtility,'o');
set(h(1),'MarkerFaceColor','b')
xlim([4500 6500]);ylim([0 1.25]);
yticks([expUtility 1.25]);
grid on;
xlabel('Monthly Expense Cost ($)');
ylabel('Utility');
legend('Utility Points');
% Utility Curve Fitting
a = -2.905e-15;
b = 0.004984;
c = 9.409;
d = -0.0005043;
curveX = linspace(4500,6500);
curveY = a*exp(b*curveX) + c*exp(d*curveX);
Array indices must be positive integers or logical values.
hold on;
plot(curveX,curveY,'Color','b');
legend('Utility Points','Utility Curve Fit');
% Consistency Check
% New Lottery
costCheck = 4650;
costUtilityCheck = 0.875;
plot(costCheck,costUtilityCheck,'*','Color','r');
legend('Utility Lottery Points','Utility Curve Fit','Consistency Check Point');
% Local Risk Aversion
x_q = 4500:1:6500;
q = (a*b^2*exp(b.*x_q) + c*d^2*exp(d.*x_q)) ./ (a*b*exp(b.*x_q) + c*d*exp(d.*x_q));
Use a different variable name here,
exp = [6500 6350 5800 4900 4500];
otherwise, you cannot use exp() as the exponential function later.
Additionally, to obtain curveY, it is sufficient to do,
curveY = theFit(curveX);
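Putting both fixes together, here is a minimal sketch (the variable name expense is my own choice, not from the original code):

```matlab
% Sketch: avoid shadowing exp() and evaluate the fit object directly.
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit = fit(x', y', 'exp2');

expense = [6500 6350 5800 4900 4500];   % was "exp", which shadowed the exp() function
curveX = linspace(4500, 6500);
curveY = theFit(curveX');               % evaluates a*exp(b*x) + c*exp(d*x) with the fitted coefficients

plot(expense, y, 'o'); hold on
plot(curveX, curveY, 'b-')
```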
I needed to reassess my points to the following, and I am now getting something way off. Sorry for the confusion.
% Curve Fit
x = [6500 6350 6000 5400 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(x' , y', 'exp2')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = 0.5
  b = 0
  c = 0
  d = 0
plot(theFit , x , y)
Did you reverse your x and y???
x and y are as shown, but the plot just comes out like a line.
Sorry, but I don't believe you. When I swap x and y, the fit looks great.
% Curve Fit
y = [6500 6350 6000 5400 4500];
x = [0 0.25 0.5 0.75 1.0];
theFit=fit(x' , y', 'exp2')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = -5.893e+07  (-5.463e+16, 5.463e+16)
  b = 0.5577  (-3.089e+04, 3.089e+04)
  c = 5.893e+07  (-5.463e+16, 5.463e+16)
  d = 0.5576  (-3.089e+04, 3.089e+04)
plot(theFit , x , y)


Answers (3)

You should normalize your x data
% Curve Fit
x = [6500 6350 6000 5400 4500];
x=(x-mean(x))/std(x);
y = [0 0.25 0.5 0.75 1.0];
Also, I would recommend downloading fminspleas from the File Exchange
and using it to generate an initial guess for fit():
e=@(a,xd)exp(a*xd);
flist={@(p,xd) e(p(1),xd) , @(p,xd) e(p(2),xd)};
[bd,ac]=fminspleas(flist,[-1,1],x, y);
theFit=fit(x',y','exp2','StartPoint',[ac(1),bd(1),ac(2), bd(2) ])
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = 0.6747  (0.0581, 1.291)
  b = -0.2594  (-0.9322, 0.4135)
  c = -0.04679  (-0.4881, 0.3946)
  d = 2.634  (-6.423, 11.69)
plot(theFit,x,y)

5 Comments

Well, you can undo the normalization after the fit is performed. It's a 1-1 change of variables.
I mentioned in the comments that I needed to change my points (I noticed an error in my work). I am now trying to fit the following to an exponential function with two terms, but I get a straight line.
% Curve Fit
x = [6500 6350 6000 5400 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(x' , y', 'exp2')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = 0.5
  b = 0
  c = 0
  d = 0
Question has been updated...
x = [6500 6350 6000 5400 4500];
xt = (x-mean(x))/std(x);
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(xt',y','exp2')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
  a = -0.04696  (-0.4897, 0.3958)
  b = 2.631  (-6.421, 11.68)
  c = 0.6749  (0.05714, 1.293)
  d = -0.2592  (-0.9328, 0.4145)
theFit.b = theFit.b/std(x);
Warning: Setting coefficient values clears confidence bounds information.
theFit.a = theFit.a*exp(-theFit.b*mean(x));
theFit.d = theFit.d/std(x);
theFit.c = theFit.c*exp(-theFit.d*mean(x));
theFit
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
Coefficients:
  a = -4.278e-10
  b = 0.00322
  c = 4.181
  d = -0.0003172
plot(theFit,x,y)
I mentioned in the comments that I needed to change my points (noticed an error in my work).
All answers in this thread have been demonstrated using your new points.


You can also use fit()'s normalizer,
x = [6500 6350 6000 5400 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(x',y','exp2','Normalize','on')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
where x is normalized by mean 5750 and std 817
Coefficients (with 95% confidence bounds):
  a = -0.04696  (-0.4897, 0.3958)
  b = 2.631  (-6.421, 11.68)
  c = 0.6749  (0.05714, 1.293)
  d = -0.2592  (-0.9328, 0.4145)
plot(theFit,x,y)

2 Comments

Thanks. I also know that I can plot theFit directly. But why isn't line 32 plotting curveY (see the last plot)? I need those a, b, c, and d values (the equation) to work for future steps in the code I am developing.
clear;
clc;
close all;
% Curve Fit
x = [6500 6350 6000 5400 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit=fit(x',y','exp2','Normalize','on')
theFit =
General model Exp2:
theFit(x) = a*exp(b*x) + c*exp(d*x)
where x is normalized by mean 5750 and std 817
Coefficients (with 95% confidence bounds):
  a = -0.04696  (-0.4897, 0.3958)
  b = 2.631  (-6.421, 11.68)
  c = 0.6749  (0.05714, 1.293)
  d = -0.2592  (-0.9328, 0.4145)
plot(theFit,x,y)
% Monthly Cost
cost = [6500 6350 6000 5400 4500];
costUtility = [0 0.25 0.5 0.75 1.0];
% Plot Utility Points
figure;
plot(cost,costUtility,'*');
xlim([4500 6500]);ylim([0 1.25]);
yticks([costUtility 1.25]);
grid on;
xlabel('Monthly Cost ($)');
ylabel('Utility');
legend('Utility Points');
% Add utility curve fit
a = -0.04696;
b = 2.631;
c = 0.6749;
d = -0.2592;
curveX = linspace(4500,6500);
curveY = a*exp(b*curveX) + c*exp(d*curveX);
hold on;
plot(curveX,curveY,'Color','b');
legend('Utility Points','Utility Curve Fit');
If you need to explicitly manipulate the coefficients and fit function, you'll have to do the normalization manually:
% Curve Fit
x = [6500 6350 6000 5400 4500];
xmu=mean(x);
xstd=std(x);
y = [0 0.25 0.5 0.75 1.0];
theFit=fit((x-xmu)'/xstd,y','exp2');
% Monthly Cost
cost = x;
costUtility = y;
% Plot Utility Points
figure;
plot(cost,costUtility,'*');
xlim([4500 6500]);ylim([0 1.25]);
yticks([costUtility 1.25]);
grid on;
xlabel('Monthly Cost ($)');
ylabel('Utility');
legend('Utility Points');
% Add utility curve fit
coeff=num2cell(coeffvalues(theFit));
[a,b,c,d]=deal(coeff{:});
curveX = linspace(4500,6500);
X=(curveX-xmu)/xstd;
curveY = a*exp(b*X) + c*exp(d*X);
hold on;
plot(curveX,curveY,'Color','b');
legend('Utility Points','Utility Curve Fit');


If we take the fitting function as y = a*exp(b*x) + c*exp(d*x)
and use the data below directly:
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
The unique stable result should be:
Sum Squared Error (SSE): 0.00221359211819696
Root of Mean Square Error (RMSE): 0.0210408750682901
Correlation Coef. (R): 0.998229714370904
R-Square: 0.996462562653016
Parameter Best Estimate
--------- -------------
a 11.4185972844776
b -0.000545792212445247
c -1.22298024843855E-22
d 0.00759142468815435
If we add one more parameter, so that the fitting function becomes y = a*exp(b*x) + c*exp(d*x) + e, the result will be perfect:
Sum Squared Error (SSE): 4.41584921368883E-29
Root of Mean Square Error (RMSE): 2.97181736103982E-15
Correlation Coef. (R): 1
R-Square: 1
Parameter Best Estimate
--------- -------------
a -2.47788945923639E-15
b 0.00505297753885221
c 332.002937639918
d -0.00141137023194644
e 0.420769231934917
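For reference, a five-parameter model with a constant offset like this can also be attempted in MATLAB with a custom fittype. A rough sketch (the start points are my own guesses, not values from this answer, and convergence will depend heavily on them):

```matlab
% Sketch: fitting y = a*exp(b*x) + c*exp(d*x) + e with a custom model.
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];

ft = fittype('a*exp(b*x) + c*exp(d*x) + e', 'independent', 'x');
% Custom nonlinear models need explicit start points; these are rough guesses.
theFit = fit(x', y', ft, 'StartPoint', [0, 0.005, 300, -0.001, 0.4])
```

With five parameters and only five data points the problem is exactly determined, so a "perfect" fit (SSE near zero) is expected if the solver converges at all.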

11 Comments

The unique stable result should be:
I don't know about unique, but the condition number below suggests that it may not be stable, assuming it's a least squares fit.
syms a b c d; syms x y [5,1]
f([a,b,c,d])= sum( ( a*exp(b*x) + c*exp(d*x) - y ).^2 )/2;
Hfun=matlabFunction(hessian(f));
aa = 11.4185972844776 ;
bb = -0.000545792212445247;
cc = -1.22298024843855E-22;
dd = 0.00759142468815435 ;
xx = [6500 6350 5800 4900 4500];
yy = [0 0.25 0.5 0.75 1.0];
args=num2cell([aa,bb,cc,dd,xx,yy]);
Hess=Hfun(args{:});
cond(Hess)
ans = Inf
For this problem, the objective value of the least squares fit (SSE) is 0.00221359211819697, and the corresponding parameters are:
1:
Sum Squared Error (SSE): 0.00221359211819697
Root of Mean Square Error (RMSE): 0.0210408750682901
Correlation Coef. (R): 0.998229714367005
R-Square: 0.996462562645233
Parameter Best Estimate
--------- -------------
a 11.4185970800621
b -0.000545792208439445
c -1.22298133472063E-22
d 0.00759142455222768
or 2 (actually the same as 1; only the order of the parameters differs):
Sum Squared Error (SSE): 0.00221359211819697
Root of Mean Square Error (RMSE): 0.0210408750682901
Correlation Coef. (R): 0.9982297143666
R-Square: 0.996462562644425
Parameter Best Estimate
--------- -------------
a -1.22297963729857E-22
b 0.007591424764864
c 11.4185972678769
d -0.000545792211987959
There may be other solutions, but they should be local solutions, not global ones. In other words, if there is any other solution with an SSE value less than 0.00221359211819697, please share it!
Matt J on 12 Nov 2023 (edited)
There are maybe other solutions, but should be local solutions, not global ones
Why?
If there are any other solutions with the SSE values less than 0.00221359211819697, give it out please!
Like I said, I don't know if there are other global solutions. I'm just not sure it is stable.
A globally optimal solution is, by definition, one for which no other feasible solution has a better objective value. So until you find and present a better result, I can confidently say that the result I provided is the globally optimal solution and all others are local optima, at least so far.
Matt J on 12 Nov 2023 (edited)
You keep saying it is the global unique solution, and I keep saying I don't disagree. But even if that's true, none of what you've argued establishes that it is stable.
I ran it ten times (more if necessary) with random start values for each parameter, and the result of each run is the same. If this is not "stable", what is?
That is a good sign, but you would probably also have to perturb the x,y inputs by small amounts to see if the solution remains approximately the same.
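The perturbation test described above could be sketched like this (the noise scale and number of trials are my own choices):

```matlab
% Sketch: jitter the data slightly, refit, and see how far the coefficients move.
rng(0)                                        % reproducible noise
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
base = fit(x', y', 'exp2', 'Normalize', 'on');
for k = 1:5
    yk = y + 1e-3*randn(size(y));             % small perturbation of the inputs
    fk = fit(x', yk', 'exp2', 'Normalize', 'on');
    disp(coeffvalues(fk) - coeffvalues(base)) % coefficient drift for this trial
end
```

If the fit is stable, the displayed drifts should be small and comparable in scale to the perturbation; large swings would support the ill-conditioning concern.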
I have to assume, though, that the software is doing some sort of data-prenormalization. We can see from the OP's experience that random initialization doesn't reach the same solution when you don't do this.
@Matt J If you have tried out the 1stOpt software, maybe you'll change your views and opinions!
The code looks like the following and is very simple; end users do not need to do anything else, such as the data pre-normalization you mentioned:
Function y=a*exp(b*x) + c*exp(d*x);
Data;
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
In less than half a second, the output will be:
Sum Squared Error (SSE): 0.00221359211819697
Root of Mean Square Error (RMSE): 0.0210408750682901
Correlation Coef. (R): 0.998229714368578
R-Square: 0.996462562648373
Parameter Best Estimate
--------- -------------
a -1.22298041267836E-22
b 0.00759142466699956
c 11.4185972934291
d -0.000545792212517027
Users do not need to worry about anything else, such as the initial start values of the parameters; every run outputs the same result.
Taking one more example, suppose the fitting function becomes y = a*exp(b*x) + c*exp(d*x) + e/x:
Function y=a*exp(b*x) + c*exp(d*x) + e/x;
Data;
x = [6500 6350 5800 4900 4500];
y = [0 0.25 0.5 0.75 1.0];
Multi-run will produce two solutions:
1:
Sum Squared Error (SSE): 2.37582717939609E-30
Root of Mean Square Error (RMSE): 6.89322446957314E-16
Correlation Coef. (R): 1
R-Square: 1
Parameter Best Estimate
--------- -------------
a -5.46715403508488E-16
b 0.00527771888944181
c 1272.58844177386
d -0.00179769933992128
e 2743.69929384194
2:
Sum Squared Error (SSE): 0
Root of Mean Square Error (RMSE): 0
Correlation Coef. (R): 1
R-Square: 1
Parameter Best Estimate
--------- -------------
a -25.5545324787999
b -0.000174029056569443
c -1.10229181033535E-13
d 0.00449303003927482
e 57050.1019982362
This means that the start values for each run are not fixed, but are most likely random.
If you have tried out the 1stOpt software, maybe you'll change your views and opinions!
I wouldn't change my opinions because that was never my opinion. My speculation was that the software was doing some sort of normalization internally, not the user.
It would be a good suggestion for MathWorks, although it is not clear how 1stOpt processes such problems internally.
MathWorks' fit() routine does have an internal normalization step, which can be enabled. However, if 1stOpt does something similar, it appears to be smart enough to post-transform the parameters and undo the effect of the data normalization; fit() does not do that.
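Undoing fit()'s normalization by hand is a one-to-one change of variables; a sketch (this mirrors the manual back-transform shown earlier in the thread):

```matlab
% Sketch: undoing 'Normalize','on' by hand, using the identity
% a*exp(b*(x-mu)/s) = (a*exp(-b*mu/s)) * exp((b/s)*x)
x = [6500 6350 6000 5400 4500];
y = [0 0.25 0.5 0.75 1.0];
theFit = fit(x', y', 'exp2', 'Normalize', 'on');

mu = mean(x);  s = std(x);                    % the stats fit() normalized with
[a, b, c, d] = deal(theFit.a, theFit.b, theFit.c, theFit.d);
aRaw = a*exp(-b*mu/s);  bRaw = b/s;           % coefficients on the original x scale
cRaw = c*exp(-d*mu/s);  dRaw = d/s;
```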


Asked on 11 Nov 2023
Commented on 13 Nov 2023