Which statistic is minimized in curve fitting app

Dear colleagues,
The statistics that evaluate the goodness of fit in the Curve Fitting app are shown in the picture below.
A description of each of them is given at this link.
My question is: which parameter/statistic is minimized to make a fit? In another discussion I have read something about a 2-norm. So is one of the statistics shown above a 2-norm?

Answers (2)

Torsten
Torsten on 26 Jan 2025
Edited: Torsten on 26 Jan 2025
My question is what parameter/statistic is minimized to make a fit?
SSE (possibly weighted, if you specify weights for the measurement data), i.e. the square of the (weighted) 2-norm of the error.
"fminimax" tries to minimize the Inf-norm of the error.

1 Comment

Boyan
Boyan on 26 Jan 2025
Edited: Boyan on 26 Jan 2025
Hi, Torsten,
I have to state that in a paper. So could you give me a link to the MATLAB documentation where this is said, i.e. that SSE is minimized?
Best regards,
Boyan


John D'Errico
John D'Errico on 26 Jan 2025
Edited: John D'Errico on 26 Jan 2025
The sum of squares of the residuals is minimized. SSE effectively stands for Sum of Squares of Errors.
What is the 2-norm? Just the square root of the sum of squares. Is minimizing the sum of squares different from minimizing the square root of the sum of squares? NO! They are the same in terms of the result: the minimizer of the sum of squares is the SAME as the minimizer of its square root. But NONE of the numbers shown in that picture is the 2-norm. Again though, that is irrelevant. You CAN view the SSE as the square of the 2-norm.
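The "same minimizer" point can be checked numerically. An illustrative Python/SciPy sketch with a hypothetical one-parameter model y ≈ a·x: because the square root is strictly increasing, the argmin of a nonnegative objective and of its square root coincide.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy data for a one-parameter model y ≈ a*x (hypothetical values).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

sse = lambda a: np.sum((y - a * x)**2)   # sum of squares
norm2 = lambda a: np.sqrt(sse(a))        # its square root, the 2-norm

a_sse = minimize_scalar(sse).x
a_norm = minimize_scalar(norm2).x

# Both objectives are minimized by the same coefficient.
print(np.isclose(a_sse, a_norm, atol=1e-4))  # True
```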
Note that if you supply weights, then fit will minimize a weighted sum of squares, given the weights you supplied. Still no real difference, except that it is a weighted sum.
Finally, fit allows a robust option, even though you did not ask about robust.
A robust fit is usually performed as an iteratively reweighted least-squares scheme. There the fit is first done using no weights. Then the points with the largest residuals are down-weighted by some scheme. (There are several choices for robust fitting; you will need to do some reading to decide exactly which method is used. I think the default, if you choose robust, is the bisquare method.) Then the fit is done again, using the new set of weights. This operation is repeated until convergence.
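The scheme described above can be sketched as follows, an illustrative Python/NumPy implementation of iteratively reweighted least squares with bisquare (Tukey) weights. This is not MathWorks' exact implementation; the tuning constant 4.685 and the MAD-based scale estimate are the commonly used bisquare defaults, and the data are hypothetical.

```python
import numpy as np

def bisquare_weights(r, c=4.685):
    """Tukey bisquare weights; residuals scaled by a robust sigma estimate."""
    s = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD-based scale
    if s == 0:
        s = 1.0
    u = r / (c * s)
    w = (1 - u**2)**2
    w[np.abs(u) >= 1] = 0.0   # points with very large residuals get zero weight
    return w

def robust_line_fit(x, y, iters=20):
    X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
    w = np.ones_like(y)                         # start unweighted
    for _ in range(iters):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        w = bisquare_weights(y - X @ beta)      # re-weight from new residuals
    return beta

# Clean line y = 1 + 2x with one gross outlier.
x = np.arange(10.0)
y = 1 + 2 * x
y[5] += 50.0                                    # outlier
b0, b1 = robust_line_fit(x, y)
print(b0, b1)   # close to the true intercept 1 and slope 2
```

An ordinary least-squares fit on the same data would be pulled noticeably toward the outlier; the reweighting drives the outlier's weight to zero.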

12 Comments

Hi, John,
I quote the first sentence of your answer:
"The sum of squares of the residuals is mimimized. SSE effectively stands for Sum of Squares of Errors."
Here I have a question: is the Sum of Squared Errors the same as the Sum of Squares of the Residuals?
Yes. A residual is what is left over when you subtract the fitted model from the data. Do you see that y_i - ŷ_i inside the sum?
So the residuals are assumed to be the errors, or the noise, in your data. And while there is a subtle distinction between noise and lack of fit, if we assume the model you have posed is the correct model for your system, then there should be no lack of fit. Therefore the residuals are presumed to be the additive noise in your data, i.e., the errors.
Dear John,
Could you help me with the source in the MATLAB documentation where it is said that SSE (Sum of Squares of Errors) is minimized by the Curve Fitter?
I didn't succeed in finding such a statement, but I need it for a scientific paper.
You clearly do not seem to accept what two people have told you: that SSE stands for Sum of Squares of Errors. Why would anything more I tell you change that?
Contact the MathWorks, DIRECTLY.
Possibly they can convince you.
Although it's self-evident for us, I'm also surprised that it's not mentioned; at least I couldn't find it on the page for the Curve Fitting app. Maybe that is because the app mainly addresses the wish for a comfortable fitting environment and not so much the theory behind it.
"lsqcurvefit" at least mentions the objective to be optimized:
Description
Nonlinear least-squares solver
Find coefficients x that solve the problem
$\min_x \|F(x,\mathrm{xdata}) - \mathrm{ydata}\|_2^2 = \min_x \sum_i \big(F(x,\mathrm{xdata}_i) - \mathrm{ydata}_i\big)^2,$
given input data xdata and the observed output ydata, where xdata and ydata are matrices or vectors, and F(x, xdata) is a matrix-valued or vector-valued function of the same size as ydata.
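As an illustrative analogue of that objective (Python/SciPy rather than MATLAB's `lsqcurvefit`; the exponential model and data are hypothetical), `scipy.optimize.least_squares` minimizes the same sum of squared residuals, up to a constant factor of 1/2:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical model F(x, xdata) = x0 * exp(x1 * xdata), noiseless data.
xdata = np.linspace(0, 1, 20)
ydata = 2.0 * np.exp(1.5 * xdata)

# least_squares minimizes 0.5 * sum(residual_i**2): the same
# sum-of-squares objective as above, up to the constant 1/2.
res = least_squares(lambda p: p[0] * np.exp(p[1] * xdata) - ydata,
                    x0=[1.0, 1.0])
print(np.round(res.x, 3))   # recovered coefficients, near [2.0, 1.5]
```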
The curvefitting app page does not mention any algorithms at all; it is just documentation on how to start the tool.
The curvefitting app page refers to the documentation for fit
The fit() documentation talks about several different options for what is to be fit.
As we do not know which options are being chosen in the curve fitting application, we cannot say what is being fit.
But what should be common to all "different options for what is to be fit" is that the (weighted) sum of squared errors is minimized, shouldn't it? Don't you agree that this should be mentioned somewhere?
If the "robust" option is used, I do not know that SSE is being minimized. "least absolute residual method" and "bisquare weights method" do not sound like SSE.
After you mentioned the different methods, I found this page that might be helpful for the OP:
Thanks for the explanations. I am convinced that SSE stands for Sum of Squares of Errors. The problem was that I didn't find it written in the MATLAB documentation that the Curve Fitter minimizes SSE for nonlinear models.
@Torsten found a MATLAB documentation page where it is said, though not clearly/simply.
I think this is pretty clear now.
From the web page:
A least-squares fitting method calculates model coefficients that minimize the sum of squared errors (SSE), which is also called the residual sum of squares. Given a set of n data points, the residual for the ith data point, $r_i$, is calculated with the formula
$$r_i = y_i - \hat{y}_i$$
where $y_i$ is the ith observed response value and $\hat{y}_i$ is the ith fitted response value. The SSE is given by
$$\mathrm{SSE} = \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$
If the response data error does not have constant variance across the values of the predictor data, the fit can be influenced by poor quality data. The weighted least-squares fitting method uses scaling factors called weights to influence the effect of a response value on the calculation of model coefficients. Use the weighted least-squares fitting method if the weights are known, or if the weights follow a particular form.
The weighted least-squares fitting method introduces weights in the formula for the SSE, which becomes
$$\mathrm{SSE} = \sum_{i=1}^{n} w_i\,(y_i - \hat{y}_i)^2$$
where $w_i$ are the weights.
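A minimal sketch of that weighted objective, assuming a simple straight-line model and hypothetical data and weights (Python/NumPy, solving the weighted normal equations directly):

```python
import numpy as np

# Weighted linear least squares: minimize sum_i w_i * (y_i - (b0 + b1*x_i))**2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])   # roughly y = 1 + 2x (hypothetical)
w = np.array([1.0, 1.0, 0.1, 1.0, 1.0])   # down-weight the third point

X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
# Weighted normal equations: (X' W X) beta = X' W y.
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def wsse(b):
    return np.sum(w * (y - X @ b)**2)

# The solution achieves a lower weighted SSE than nearby perturbations.
print(wsse(beta) < wsse(beta + np.array([0.01, 0.0])))  # True
print(wsse(beta) < wsse(beta + np.array([0.0, 0.01])))  # True
```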


Asked on 26 Jan 2025. Edited on 8 Feb 2025.
