Which statistic is minimized in the Curve Fitting app?
Dear colleagues,
The statistics which evaluate the goodness of fit in the Curve Fitting app are shown in the picture below.

A description of each of these statistics is given at this link.
My question is: which parameter/statistic is minimized to make the fit? In another discussion I read something about a 2-norm. Is any of the statistics shown above a 2-norm?
Answers (2)
My question is what parameter/statistic is minimized to make a fit?
SSE (weighted, if you specify weights for the measurement data), i.e. the square of the (weighted) 2-norm of the error.
"fminimax" tries to minimize the Inf-norm of the error.
1 Comment
John D'Errico
on 26 Jan 2025
Edited: John D'Errico
on 26 Jan 2025
1 vote
The sum of squares of the residuals is minimized. SSE effectively stands for Sum of Squares of Errors.
What is the 2-norm? Just the square root of the sum of squares. Is minimizing the sum of squares different from minimizing the square root of the sum of squares? NO! They are the same in terms of the result: because the square root is monotone increasing, the minimizer of the sum of squares is the SAME as the minimizer of its square root. But NONE of the numbers shown in that picture is the 2-norm. Again though, that is irrelevant; you CAN view the SSE as the square of the 2-norm.
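To see this numerically, here is a quick Python sketch (with made-up data, not from the thread): a grid search over a single slope parameter finds the same minimizer whether the objective is the SSE or its square root.

```python
import math

# Toy data: y is roughly a*x; we fit the single slope parameter a.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

def sse(a):
    # Sum of squared residuals for slope a.
    return sum((y - a * x) ** 2 for x, y in zip(xs, ys))

def two_norm(a):
    # 2-norm of the residual vector: the square root of the SSE.
    return math.sqrt(sse(a))

# Grid-search both objectives over the same candidate slopes.
grid = [i / 1000 for i in range(1000, 3000)]
a_sse = min(grid, key=sse)
a_norm = min(grid, key=two_norm)

print(a_sse, a_norm)  # identical minimizers, since sqrt is monotone
```

Since the square root is strictly increasing, ordering of objective values is preserved, so the argmin is unchanged.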
Note that if you supply weights, then fit will minimize a weighted sum of squares, given the weights you supplied. Still no real difference, except that it is a weighted sum.
Finally, fit allows a robust option, even though you did not ask about robust.
A robust fit is usually performed as an iteratively reweighted scheme. The fit is first done using no weights. Then the points with the largest residuals are downweighted by some scheme. (There are several choices for robust fitting; you will need to do some reading to decide exactly which method is used. I think the default, if you choose robust, is the bisquare method.) The fit is then done again using the new set of weights, and this operation is repeated until convergence.
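The iterative idea can be sketched in a few lines of Python. This is not the app's exact implementation: the model here is just a constant (a robust location estimate), the data are made up, and the tuning constant 4.685 is the conventional bisquare default.

```python
# Minimal sketch of iteratively reweighted least squares (IRLS) with
# bisquare (Tukey) weights, fitting the simplest possible model: a constant.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]  # 25.0 is an outlier

def median(v):
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

mu = sum(data) / len(data)          # start from the unweighted fit
for _ in range(50):                 # iterate until the fit stops changing
    r = [y - mu for y in data]      # residuals of the current fit
    # Robust scale estimate: median absolute deviation (MAD).
    s = median([abs(e) for e in r]) / 0.6745 or 1.0
    k = 4.685 * s
    # Bisquare weights: large residuals are downweighted, huge ones to zero.
    w = [(1 - (e / k) ** 2) ** 2 if abs(e) < k else 0.0 for e in r]
    mu_new = sum(wi * y for wi, y in zip(w, data)) / sum(w)
    if abs(mu_new - mu) < 1e-12:
        break
    mu = mu_new

print(mu)  # lands near 10; the outlier has been rejected
```

The same loop structure applies to any model: refit with the current weights, recompute residuals, recompute weights, repeat.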
12 Comments
Boyan
on 26 Jan 2025
John D'Errico
on 26 Jan 2025
Yes. A residual is what is left over, when you subtract the model as fit from the data. Do you see that y_i - y_i(hat) inside the sum?
So the residuals are assumed to be the errors, or the noise, in your data. And while there is a subtle distinction between noise and lack of fit, if we assume the model you have posed is the correct model for your system, then there should be no lack of fit. Therefore the residuals are presumed to be the additive noise in your data, i.e., the errors.
Boyan
on 29 Jan 2025
John D'Errico
on 5 Feb 2025
Edited: John D'Errico
on 5 Feb 2025
You clearly do not seem to accept what two people have told you: that SSE stands for Sum of Squares of Errors. Why would anything more I tell you change that?
Contact the MathWorks, DIRECTLY.
Possibly they can convince you.
Torsten
on 5 Feb 2025
Although it's self-evident for us, I'm also surprised that it's not mentioned; at least I couldn't find it on the page for the Curve Fitting app. Maybe that's because the app mainly aims to provide a comfortable fitting environment rather than to explain the theory.
"lsqcurvefit" at least mentions the objective to be optimized:
Description
Nonlinear least-squares solver
Find coefficients x that solve the problem
$\min_x \lVert F(x,\text{xdata}) - \text{ydata} \rVert_2^2 = \min_x \sum_i \left( F(x,\text{xdata}_i) - \text{ydata}_i \right)^2$
given input data xdata, and the observed output ydata, where xdata and ydata are matrices or vectors, and F (x, xdata) is a matrix-valued or vector-valued function of the same size as ydata.
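For the simplest model, a straight line, the minimizer of that objective has a closed form via the normal equations. A Python sketch with made-up data (illustrating the math, not calling lsqcurvefit):

```python
# For a linear model F(c, x) = c0 + c1*x, the least-squares problem
#   min_c  sum_i (F(c, x_i) - y_i)^2
# is solved exactly by the 2x2 normal equations.
xdata = [0.0, 1.0, 2.0, 3.0, 4.0]
ydata = [1.1, 2.9, 5.2, 7.1, 8.8]  # roughly y = 1 + 2x

n = len(xdata)
sx = sum(xdata)
sy = sum(ydata)
sxx = sum(x * x for x in xdata)
sxy = sum(x * y for x, y in zip(xdata, ydata))

# Normal-equation solution for slope and intercept.
c1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
c0 = (sy - c1 * sx) / n                          # intercept

# The minimized objective: the sum of squared residuals.
sse = sum((c0 + c1 * x - y) ** 2 for x, y in zip(xdata, ydata))
print(c0, c1, sse)
```

For nonlinear models no closed form exists, which is why lsqcurvefit iterates; the objective being minimized is the same.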
Walter Roberson
on 5 Feb 2025
The curvefitting app page does not mention any algorithms at all; it is just documentation on how to start the tool.
The curvefitting app page refers to the documentation for fit
The fit() documentation talks about several different options for what is to be fit.
As we do not know which options are being chosen in the curve fitting application, we cannot say what is being fit.
But what should be common to all "different options for what is to be fit" is that the (weighted) sum of squared errors is minimized, shouldn't it? Don't you agree that this should be mentioned somewhere?
Walter Roberson
on 5 Feb 2025
If the "robust" option is used, I do not know that SSE is being minimized. "least absolute residual method" and "bisquare weights method" do not sound like SSE.
Torsten
on 5 Feb 2025
After you mentioned the different methods, I found this page that might be helpful for the OP:
Walter Roberson
on 8 Feb 2025
Boyan
on 8 Feb 2025
I think this is pretty clear now.
From the web page:
A least-squares fitting method calculates model coefficients that minimize the sum of squared errors (SSE), which is also called the residual sum of squares. Given a set of n data points, the residual $r_i$ for the ith data point is calculated with the formula

$r_i = y_i - \hat{y}_i$

where $y_i$ is the ith observed response value and $\hat{y}_i$ is the ith fitted response value. The SSE is given by

$\text{SSE} = \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$

If the response data error does not have constant variance across the values of the predictor data, the fit can be influenced by poor-quality data. The weighted least-squares fitting method uses scaling factors called weights to influence the effect of a response value on the calculation of the model coefficients. Use the weighted least-squares fitting method if the weights are known, or if the weights follow a particular form.

The weighted least-squares fitting method introduces weights into the formula for the SSE, which becomes

$\text{SSE} = \sum_{i=1}^{n} w_i (y_i - \hat{y}_i)^2$

where the $w_i$ are the weights.
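A small Python sketch of the weighted SSE formula above (made-up data): for the simplest model, a fitted constant, the value that minimizes the weighted SSE is the weighted mean, so a downweighted point pulls the fit less.

```python
# Weighted SSE for a constant model yhat = m:
# the minimizer of sum_i w_i*(y_i - m)^2 is the weighted mean.
ys = [1.0, 2.0, 10.0]
ws = [1.0, 1.0, 0.1]   # downweight the last (poor-quality) point

def wsse(m):
    # Weighted sum of squared residuals for fitted value m.
    return sum(w * (y - m) ** 2 for w, y in zip(ws, ys))

# Closed form: setting d/dm sum_i w_i (y_i - m)^2 = 0 gives the weighted mean.
m_star = sum(w * y for w, y in zip(ws, ys)) / sum(ws)

# Any perturbed candidate has a larger (or equal) weighted SSE.
assert all(wsse(m_star) <= wsse(m_star + d) for d in (-1.0, -0.1, 0.1, 1.0))
print(m_star, wsse(m_star))
```

With equal weights this reduces to the ordinary SSE and the plain mean, matching the unweighted formula above.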