# lsqcurvefit

Solve nonlinear curve-fitting (data-fitting) problems in the least-squares sense

## Description

Nonlinear least-squares solver

Find coefficients x that solve the problem

$\underset{x}{\min}\,{\left\|F\left(x,\text{xdata}\right)-\text{ydata}\right\|}_{2}^{2}=\underset{x}{\min}\,\sum_{i}{\left(F\left(x,\text{xdata}_{i}\right)-\text{ydata}_{i}\right)}^{2},$

given input data xdata and the observed output ydata, where xdata and ydata are matrices or vectors, and F(x, xdata) is a matrix-valued or vector-valued function of the same size as ydata.

Optionally, the components of x are subject to the constraints

$\begin{array}{c}\text{lb}\le x\le \text{ub}\\ A\,x\le b\\ \text{Aeq}\,x=\text{beq}\\ c\left(x\right)\le 0\\ \text{ceq}\left(x\right)=0.\end{array}$

The arguments x, lb, and ub can be vectors or matrices; see Matrix Arguments.

The lsqcurvefit function uses the same algorithm as lsqnonlin. lsqcurvefit simply provides a convenient interface for data-fitting problems.

Rather than compute the sum of squares, lsqcurvefit requires the user-defined function to compute the vector-valued function

$F\left(x,xdata\right)=\left[\begin{array}{c}F\left(x,xdata\right)\left(1\right)\\ F\left(x,xdata\right)\left(2\right)\\ ⋮\\ F\left(x,xdata\right)\left(k\right)\end{array}\right].$


x = lsqcurvefit(fun,x0,xdata,ydata) starts at x0 and finds coefficients x to best fit the nonlinear function fun(x,xdata) to the data ydata (in the least-squares sense). ydata must be the same size as the vector (or matrix) F returned by fun.

Note

Passing Extra Parameters explains how to pass extra parameters to the vector function fun(x), if necessary.


x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. You can fix the solution component x(i) by specifying lb(i) = ub(i).

Note

If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are [].

Components of x0 that violate the bounds lb ≤ x ≤ ub are reset to the interior of the box defined by the bounds. Components that respect the bounds are not changed.


x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,A,b,Aeq,beq) constrains the solution to satisfy the linear constraints

A x ≤ b

Aeq x = beq.


x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,A,b,Aeq,beq,nonlcon) constrains the solution to satisfy the nonlinear constraints in the nonlcon(x) function. nonlcon returns two outputs, c and ceq. The solver attempts to satisfy the constraints

c ≤ 0

ceq = 0.


x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options) or x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,A,b,Aeq,beq,nonlcon,options) minimizes with the optimization options specified in options. Use optimoptions to set these options. Pass empty matrices for lb and ub and for other input arguments if the arguments do not exist.

x = lsqcurvefit(problem) finds the minimum for problem, a structure described in problem.

[x,resnorm] = lsqcurvefit(___), for any input arguments, returns the value of the squared 2-norm of the residual at x: sum((fun(x,xdata)-ydata).^2).


[x,resnorm,residual,exitflag,output] = lsqcurvefit(___) additionally returns the value of the residual fun(x,xdata)-ydata at the solution x, a value exitflag that describes the exit condition, and a structure output that contains information about the optimization process.

[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqcurvefit(___) additionally returns a structure lambda whose fields contain the Lagrange multipliers at the solution x, and the Jacobian of fun at the solution x.

## Examples


Suppose that you have observation time data xdata and observed response data ydata, and you want to find parameters $x\left(1\right)$ and $x\left(2\right)$ to fit a model of the form

$\text{ydata}=x\left(1\right)\mathrm{exp}\left(x\left(2\right)\text{xdata}\right).$

Input the observation times and responses.

xdata = ...
[0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
[455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];

Create a simple exponential decay model.

fun = @(x,xdata)x(1)*exp(x(2)*xdata);

Fit the model using the starting point x0 = [100,-1].

x0 = [100,-1];
x = lsqcurvefit(fun,x0,xdata,ydata)
Local minimum possible.

lsqcurvefit stopped because the final change in the sum of squares relative to
its initial value is less than the value of the function tolerance.
x = 1×2

498.8309   -0.1013

Plot the data and the fitted curve.

times = linspace(xdata(1),xdata(end));
plot(xdata,ydata,'ko',times,fun(x,times),'b-')
legend('Data','Fitted exponential')
title('Data and Fitted Curve')
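For readers comparing conventions outside MATLAB, a rough Python analogue of this fit uses SciPy's least_squares, which follows the same pattern as lsqcurvefit: the user supplies the residual vector fun(x,xdata) - ydata, and the solver squares and sums it internally. This sketch is illustrative only and is not part of lsqcurvefit.

```python
import numpy as np
from scipy.optimize import least_squares

# Same observation times and responses as the MATLAB example above
xdata = np.array([0.9, 1.5, 13.8, 19.8, 24.1, 28.2, 35.2, 60.3, 74.6, 81.3])
ydata = np.array([455.2, 428.6, 124.1, 67.3, 43.2, 28.1, 13.1, -0.4, -1.3, -1.5])

def residual(x, t, y):
    # Residual vector of the model x(1)*exp(x(2)*t) against the data;
    # the solver forms the sum of squares itself
    return x[0] * np.exp(x[1] * t) - y

res = least_squares(residual, x0=[100.0, -1.0], args=(xdata, ydata))
# res.x should land near the MATLAB solution [498.8309, -0.1013];
# 2*res.cost is the squared 2-norm of the residual (resnorm)
```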

Find the best exponential fit to data where the fitting parameters are constrained.

Generate data from an exponential decay model plus noise. The model is

$y=\mathrm{exp}\left(-1.3t\right)+\epsilon ,$

with $t$ ranging from 0 through 3, where $\epsilon$ is normally distributed noise with mean 0 and standard deviation 0.05.

rng default % for reproducibility
xdata = linspace(0,3);
ydata = exp(-1.3*xdata) + 0.05*randn(size(xdata));

The problem is: given the data (xdata, ydata), find the exponential decay model $y=x\left(1\right)\mathrm{exp}\left(x\left(2\right)\text{xdata}\right)$ that best fits the data, with the parameters bounded as follows:

$0\le x\left(1\right)\le 3/4$

$-2\le x\left(2\right)\le -1.$

lb = [0,-2];
ub = [3/4,-1];

Create the model.

fun = @(x,xdata)x(1)*exp(x(2)*xdata);

Create an initial guess.

x0 = [1/2,-2];

Solve the bounded fitting problem.

x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.
x = 1×2

0.7500   -1.0000

Examine how well the resulting curve fits the data. Because the bounds keep the solution away from the true values, the fit is mediocre.

plot(xdata,ydata,'ko',xdata,fun(x,xdata),'b-')
legend('Data','Fitted exponential')
title('Data and Fitted Curve')
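The same bounded fit can be sketched in Python with SciPy's least_squares. NumPy's random stream differs from MATLAB's rng default, so the noisy data and therefore the fitted values differ slightly; the seed below is arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)        # arbitrary seed; not MATLAB's rng default
xdata = np.linspace(0, 3, 100)
ydata = np.exp(-1.3 * xdata) + 0.05 * rng.standard_normal(xdata.size)

def residual(x, t, y):
    # Residual vector of the model x(1)*exp(x(2)*t) against the data
    return x[0] * np.exp(x[1] * t) - y

# Bounds 0 <= x(1) <= 3/4 and -2 <= x(2) <= -1, as in the MATLAB example
res = least_squares(residual, x0=[0.5, -2.0],
                    bounds=([0.0, -2.0], [0.75, -1.0]),
                    args=(xdata, ydata))
# The true parameters (1, -1.3) lie outside the bounds, so the solution
# is expected to sit on the boundary, near [0.75, -1]
```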

Create artificial data for a nonlinear model $y=a+b\mathrm{arctan}\left(t-{t}_{0}\right)+ct$ with parameters $a$, $b$, ${t}_{0}$, and $c$, for time $t$ from 2 to 7. Add noise to the data using randn.

a = 2; % x(1)
b = 4; % x(2)
t0 = 5; % x(3)
c = 1/2; % x(4)
xdata = linspace(2,7);
rng default
ydata = a + b*atan(xdata - t0) + c*xdata + 1/10*randn(size(xdata));

Plot the data.

plot(xdata,ydata,'ro')

Fit a nonlinear model to the data with the following constraints:

• All coefficients are between 0 and 7.

• ${x}_{1}+{x}_{2}\ge {x}_{3}+{x}_{4}$. You can write this constraint in the form A*x <= b using A = [-1 -1 1 1] and b = 0.

lb = zeros(4,1);
ub = 7*ones(4,1);
A = [-1 -1 1 1];
b = 0;

The myfun function at the end of this example creates the objective function for this model.

Solve the fitting problem starting from the point [1 2 3 1].

startpt = [1 2 3 1];
Aeq = [];
beq = [];
[x,res] = lsqcurvefit(@myfun,startpt,xdata,ydata,lb,ub,A,b,Aeq,beq)
Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 1×4

2.3447    4.0972    4.9979    0.4303

res = 1.2682

The returned solution is not far from the original point [2 4 5 1/2]. Plot the data against the curve from the solution point.

plot(xdata,ydata,'ro',xdata,myfun(x,xdata),'b-')

The returned solution matches the data pretty well. Is the constraint active?

A*x(:)
ans = -1.0137

The constraint is not active, because A*x < 0.

function F = myfun(x,xdata)
a = x(1);
b = x(2);
t0 = x(3);
c = x(4);
F = a + b*atan(xdata - t0) + c*xdata;
end

Create artificial data for a nonlinear model $y=a+b\mathrm{arctan}\left(t-{t}_{0}\right)+ct$ with parameters $a$, $b$, ${t}_{0}$, and $c$, for time $t$ from 2 to 7. Add noise to the data using randn.

a = 2; % x(1)
b = 4; % x(2)
t0 = 5; % x(3)
c = 1/2; % x(4)
xdata = linspace(2,7);
rng default
ydata = a + b*atan(xdata - t0) + c*xdata + 1/10*randn(size(xdata));

Plot the data.

plot(xdata,ydata,'ro')

Fit a nonlinear model to the data with the following constraints:

• All coefficients are between 0 and 7.

• ${x}_{1}^{2}+{x}_{2}^{2}\le {4}^{2}$

lb = zeros(4,1);
ub = 7*ones(4,1);

The problem has no linear constraints.

A = [];
b = [];
Aeq = [];
beq = [];

The myfun function at the end of this example creates the objective function for this model. The nlcon function at the end of this example creates the nonlinear constraint function.

Solve the fitting problem starting from the point [1 2 3 1].

startpt = [1 2 3 1];
[x,res] = lsqcurvefit(@myfun,startpt,xdata,ydata,lb,ub,A,b,Aeq,beq,@nlcon)
Feasible point with lower objective function value found, but optimality criteria not satisfied. See output.bestfeasible.

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
x = 1×4

1.3806    3.7542    5.0169    0.6337

res = 1.6018

The returned solution x is not at the original point [2 4 5 1/2] because the nonlinear constraint is violated at that point. Plot the data against the curve from the solution point and compute the constraint function.

plot(xdata,ydata,'ro',xdata,myfun(x,xdata),'b-')

[c,ceq] = nlcon(x)
c = -3.1307e-06
ceq =

[]

The nonlinear inequality constraint is active at the solution because c is essentially 0 there (within the constraint tolerance).

Even though the solution point is not at the original point, the solution curve matches the data pretty well.

function F = myfun(x,xdata)
a = x(1);
b = x(2);
t0 = x(3);
c = x(4);
F = a + b*atan(xdata - t0) + c*xdata;
end

function [c,ceq] = nlcon(x)
ceq = [];
c = x(1)^2 + x(2)^2 - 4^2;
end

Compare the results of fitting with the default 'trust-region-reflective' algorithm and the 'levenberg-marquardt' algorithm.

Suppose that you have observation time data xdata and observed response data ydata, and you want to find parameters $x\left(1\right)$ and $x\left(2\right)$ to fit a model of the form

$\text{ydata}=x\left(1\right)\mathrm{exp}\left(x\left(2\right)\text{xdata}\right).$

Input the observation times and responses.

xdata = ...
[0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
[455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];

Create a simple exponential decay model.

fun = @(x,xdata)x(1)*exp(x(2)*xdata);

Fit the model using the starting point x0 = [100,-1].

x0 = [100,-1];
x = lsqcurvefit(fun,x0,xdata,ydata)
Local minimum possible.

lsqcurvefit stopped because the final change in the sum of squares relative to
its initial value is less than the value of the function tolerance.
x = 1×2

498.8309   -0.1013

Compare the solution with that of a 'levenberg-marquardt' fit.

options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
lb = [];
ub = [];
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options)
Local minimum possible.
lsqcurvefit stopped because the relative size of the current step is less than
the value of the step size tolerance.
x = 1×2

498.8309   -0.1013

The two algorithms converged to the same solution. Plot the data and the fitted exponential model.

times = linspace(xdata(1),xdata(end));
plot(xdata,ydata,'ko',times,fun(x,times),'b-')
legend('Data','Fitted exponential')
title('Data and Fitted Curve')

Compare the results of fitting with the default 'trust-region-reflective' algorithm and the 'levenberg-marquardt' algorithm. Examine the solution process to see which is more efficient in this case.

Suppose that you have observation time data xdata and observed response data ydata, and you want to find parameters $x\left(1\right)$ and $x\left(2\right)$ to fit a model of the form

$\text{ydata}=x\left(1\right)\mathrm{exp}\left(x\left(2\right)\text{xdata}\right).$

Input the observation times and responses.

xdata = ...
[0.9 1.5 13.8 19.8 24.1 28.2 35.2 60.3 74.6 81.3];
ydata = ...
[455.2 428.6 124.1 67.3 43.2 28.1 13.1 -0.4 -1.3 -1.5];

Create a simple exponential decay model.

fun = @(x,xdata)x(1)*exp(x(2)*xdata);

Fit the model using the starting point x0 = [100,-1].

x0 = [100,-1];
[x,resnorm,residual,exitflag,output] = lsqcurvefit(fun,x0,xdata,ydata);
Local minimum possible.

lsqcurvefit stopped because the final change in the sum of squares relative to
its initial value is less than the value of the function tolerance.

Compare the solution with that of a 'levenberg-marquardt' fit.

options = optimoptions('lsqcurvefit','Algorithm','levenberg-marquardt');
lb = [];
ub = [];
[x2,resnorm2,residual2,exitflag2,output2] = lsqcurvefit(fun,x0,xdata,ydata,lb,ub,options);
Local minimum possible.
lsqcurvefit stopped because the relative size of the current step is less than
the value of the step size tolerance.

Are the solutions equivalent?

norm(x-x2)
ans = 2.0642e-06

Yes, the solutions are equivalent.

Which algorithm took fewer function evaluations to arrive at the solution?

fprintf(['The ''trust-region-reflective'' algorithm took %d function evaluations,\n',...
'and the ''levenberg-marquardt'' algorithm took %d function evaluations.\n'],...
output.funcCount,output2.funcCount)
The 'trust-region-reflective' algorithm took 87 function evaluations,
and the 'levenberg-marquardt' algorithm took 72 function evaluations.

Plot the data and the fitted exponential model.

times = linspace(xdata(1),xdata(end));
plot(xdata,ydata,'ko',times,fun(x,times),'b-')
legend('Data','Fitted exponential')
title('Data and Fitted Curve')

The fit looks good. How large are the residuals?

fprintf(['The ''trust-region-reflective'' algorithm has residual norm %f,\n',...
'and the ''levenberg-marquardt'' algorithm has residual norm %f.\n'],...
resnorm,resnorm2)
The 'trust-region-reflective' algorithm has residual norm 9.504887,
and the 'levenberg-marquardt' algorithm has residual norm 9.504887.

## Input Arguments


Function you want to fit, specified as a function handle or the name of a function. For the 'interior-point' algorithm, fun must be a function handle. fun is a function that takes two inputs: a vector or matrix x, and a vector or matrix xdata. fun returns a vector or matrix F, the objective function evaluated at x and xdata.

Note

fun should return fun(x,xdata), and not the sum-of-squares sum((fun(x,xdata)-ydata).^2). lsqcurvefit implicitly computes the sum of squares of the components of fun(x,xdata)-ydata. See Examples.

The function fun can be specified as a function handle for a function file:

x = lsqcurvefit(@myfun,x0,xdata,ydata)

where myfun is a MATLAB® function such as

function F = myfun(x,xdata)
F = ...     % Compute function values at x, xdata

fun can also be a function handle for an anonymous function.

f = @(x,xdata)x(1)*xdata.^2+x(2)*sin(xdata);
x = lsqcurvefit(f,x0,xdata,ydata);

lsqcurvefit passes x to your objective function in the shape of the x0 argument. For example, if x0 is a 5-by-3 array, then lsqcurvefit passes x to fun as a 5-by-3 array.

If the Jacobian can also be computed and the 'SpecifyObjectiveGradient' option is true, set by

options = optimoptions('lsqcurvefit','SpecifyObjectiveGradient',true)

then the function fun must return a second output argument with the Jacobian value J (a matrix) at x. By checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm only needs the value of F but not J).

function [F,J] = myfun(x,xdata)
F = ...          % objective function values at x
if nargout > 1   % two output arguments
    J = ...      % Jacobian of the function evaluated at x
end

If fun returns a vector (matrix) of m components and x has n elements, where n is the number of elements of x0, the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The Jacobian J is the transpose of the gradient of F.) For more information, see Writing Vector and Matrix Objective Functions.
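As an illustration of this J(i,j) = ∂F(i)/∂x(j) layout, the sketch below supplies an analytic Jacobian in Python with SciPy's least_squares, where the jac argument plays the role of fun's second output (this is a SciPy analogue, not the lsqcurvefit interface). For the exponential model from the examples, m = 10 data points and n = 2 parameters give a 10-by-2 Jacobian.

```python
import numpy as np
from scipy.optimize import least_squares

xdata = np.array([0.9, 1.5, 13.8, 19.8, 24.1, 28.2, 35.2, 60.3, 74.6, 81.3])
ydata = np.array([455.2, 428.6, 124.1, 67.3, 43.2, 28.1, 13.1, -0.4, -1.3, -1.5])

def residual(x):
    # F(x, xdata) - ydata for the model x(1)*exp(x(2)*xdata)
    return x[0] * np.exp(x[1] * xdata) - ydata

def jac(x):
    # J is m-by-n: J[i, j] = dF(i)/dx(j)
    e = np.exp(x[1] * xdata)
    return np.column_stack([e,                   # dF/dx(1)
                            x[0] * xdata * e])   # dF/dx(2)

res = least_squares(residual, [100.0, -1.0], jac=jac)
# res.jac at the solution has shape (10, 2): m equations, n variables
```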

Example: @(x,xdata)x(1)*exp(-x(2)*xdata)

Data Types: char | function_handle | string

Initial point, specified as a real vector or real array. Solvers use the number of elements in x0 and the size of x0 to determine the number and size of variables that fun accepts.

Example: x0 = [1,2,3,4]

Data Types: double

Input data for model, specified as a real vector or real array. The model is

ydata = fun(x,xdata),

where xdata and ydata are fixed arrays, and x is the array of parameters that lsqcurvefit changes to search for a minimum sum of squares.

Example: xdata = [1,2,3,4]

Data Types: double

Response data for model, specified as a real vector or real array. The model is

ydata = fun(x,xdata),

where xdata and ydata are fixed arrays, and x is the array of parameters that lsqcurvefit changes to search for a minimum sum of squares.

The ydata array must be the same size and shape as the array fun(x0,xdata).

Example: ydata = [1,2,3,4]

Data Types: double

Lower bounds, specified as a real vector or real array. If the number of elements in x0 is equal to the number of elements in lb, then lb specifies that

x(i) >= lb(i) for all i.

If numel(lb) < numel(x0), then lb specifies that

x(i) >= lb(i) for 1 <= i <= numel(lb).

If lb has fewer elements than x0, solvers issue a warning.

Example: To specify that all x components are positive, use lb = zeros(size(x0)).

Data Types: double

Upper bounds, specified as a real vector or real array. If the number of elements in x0 is equal to the number of elements in ub, then ub specifies that

x(i) <= ub(i) for all i.

If numel(ub) < numel(x0), then ub specifies that

x(i) <= ub(i) for 1 <= i <= numel(ub).

If ub has fewer elements than x0, solvers issue a warning.

Example: To specify that all x components are less than 1, use ub = ones(size(x0)).

Data Types: double

Linear inequality constraints, specified as a real matrix. A is an M-by-N matrix, where M is the number of inequalities, and N is the number of variables (number of elements in x0). For large problems, pass A as a sparse matrix.

A encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and b is a column vector with M elements.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: double

Linear inequality constraints, specified as a real vector. b is an M-element vector related to the A matrix. If you pass b as a row vector, solvers internally convert b to the column vector b(:). For large problems, pass b as a sparse vector.

b encodes the M linear inequalities

A*x <= b,

where x is the column vector of N variables x(:), and A is a matrix of size M-by-N.

For example, consider these inequalities:

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30.

Specify the inequalities by entering the following constraints.

A = [1,2;3,4;5,6];
b = [10;20;30];

Example: To specify that the x components sum to 1 or less, use A = ones(1,N) and b = 1.

Data Types: double

Linear equality constraints, specified as a real matrix. Aeq is an Me-by-N matrix, where Me is the number of equalities, and N is the number of variables (number of elements in x0). For large problems, pass Aeq as a sparse matrix.

Aeq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and beq is a column vector with Me elements.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: double

Linear equality constraints, specified as a real vector. beq is an Me-element vector related to the Aeq matrix. If you pass beq as a row vector, solvers internally convert beq to the column vector beq(:). For large problems, pass beq as a sparse vector.

beq encodes the Me linear equalities

Aeq*x = beq,

where x is the column vector of N variables x(:), and Aeq is a matrix of size Me-by-N.

For example, consider these equalities:

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20.

Specify the equalities by entering the following constraints.

Aeq = [1,2,3;2,4,1];
beq = [10;20];

Example: To specify that the x components sum to 1, use Aeq = ones(1,N) and beq = 1.

Data Types: double

Nonlinear constraints, specified as a function handle. nonlcon is a function that accepts a vector or array x and returns two arrays, c(x) and ceq(x).

• c(x) is the array of nonlinear inequality constraints at x. lsqcurvefit attempts to satisfy

 c(x) <= 0 for all entries of c.
• ceq(x) is the array of nonlinear equality constraints at x. lsqcurvefit attempts to satisfy

 ceq(x) = 0 for all entries of ceq.

For example,

x = lsqcurvefit(@myfun,x0,xdata,ydata,lb,ub,A,b,Aeq,beq,@mycon,options)

where mycon is a MATLAB function such as

function [c,ceq] = mycon(x)
c = ...     % Compute nonlinear inequalities at x.
ceq = ...   % Compute nonlinear equalities at x.

If the gradients of the constraints can also be computed and the SpecifyConstraintGradient option is true, as set by

options = optimoptions('lsqcurvefit','SpecifyConstraintGradient',true)

then nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). GC and GCeq can be sparse or dense. If GC or GCeq is large, with relatively few nonzero entries, save running time and memory in the 'interior-point' algorithm by representing them as sparse matrices. For more information, see Nonlinear Constraints.

Data Types: function_handle

Optimization options, specified as the output of optimoptions or a structure such as optimset returns.

Some options apply to all algorithms, and others are relevant for particular algorithms. See Optimization Options Reference for detailed information.

Some options are absent from the optimoptions display. These options appear in italics in the following table. For details, see View Optimization Options.

Example: options = optimoptions('lsqcurvefit','FiniteDifferenceType','central')

Problem structure, specified as a structure with the following fields:

| Field Name | Entry |
| --- | --- |
| objective | Objective function |
| x0 | Initial point for x |
| xdata | Input data for objective function |
| ydata | Output data to be matched by objective function |
| Aineq | Matrix for linear inequality constraints |
| bineq | Vector for linear inequality constraints |
| Aeq | Matrix for linear equality constraints |
| beq | Vector for linear equality constraints |
| lb | Vector of lower bounds |
| ub | Vector of upper bounds |
| nonlcon | Nonlinear constraint function |
| solver | 'lsqcurvefit' |
| options | Options created with optimoptions |
You must supply at least the objective, x0, solver, xdata, ydata, and options fields in the problem structure.

Data Types: struct

## Output Arguments


Solution, returned as a real vector or real array. The size of x is the same as the size of x0. Typically, x is a local solution to the problem when exitflag is positive. For information on the quality of the solution, see When the Solver Succeeds.

Squared norm of the residual, returned as a nonnegative real. resnorm is the squared 2-norm of the residual at x: sum((fun(x,xdata)-ydata).^2).

Value of objective function at solution, returned as an array. In general, residual = fun(x,xdata)-ydata.

Reason the solver stopped, returned as an integer.

| Exit flag | Description |
| --- | --- |
| 1 | Function converged to a solution x. |
| 2 | Change in x is less than the specified tolerance, or Jacobian at x is undefined. |
| 3 | Change in the residual is less than the specified tolerance. |
| 4 | Relative magnitude of search direction is smaller than the step tolerance. |
| 0 | Number of iterations exceeds options.MaxIterations or number of function evaluations exceeded options.MaxFunctionEvaluations. |
| -1 | A plot function or output function stopped the solver. |
| -2 | No feasible point found. The bounds lb and ub are inconsistent, or the solver stopped at an infeasible point. |

Information about the optimization process, returned as a structure with fields:

| Field | Description |
| --- | --- |
| firstorderopt | Measure of first-order optimality |
| iterations | Number of iterations taken |
| funcCount | Number of function evaluations |
| cgiterations | Total number of PCG iterations ('trust-region-reflective' and 'interior-point' algorithms) |
| stepsize | Final displacement in x |
| constrviolation | Maximum of constraint functions ('interior-point' algorithm) |
| bestfeasible | Best (lowest objective function) feasible point encountered ('interior-point' algorithm). A structure with the fields x, fval, firstorderopt, and constrviolation. If no feasible point is found, the bestfeasible field is empty. For this purpose, a point is feasible when the maximum of the constraint functions does not exceed options.ConstraintTolerance. The bestfeasible point can differ from the returned solution point x for a variety of reasons. For an example, see Obtain Best Feasible Point. |
| algorithm | Optimization algorithm used |
| message | Exit message |

Lagrange multipliers at the solution, returned as a structure with fields:

| Field | Description |
| --- | --- |
| lower | Lower bounds corresponding to lb |
| upper | Upper bounds corresponding to ub |
| ineqlin | Linear inequalities corresponding to A and b |
| eqlin | Linear equalities corresponding to Aeq and beq |
| ineqnonlin | Nonlinear inequalities corresponding to the c in nonlcon |
| eqnonlin | Nonlinear equalities corresponding to the ceq in nonlcon |

Jacobian at the solution, returned as a real matrix. jacobian(i,j) is the partial derivative of fun(i) with respect to x(j) at the solution x.

For problems with active constraints at the solution, jacobian is not useful for estimating confidence intervals.

## Limitations

• The trust-region-reflective algorithm does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, lsqcurvefit uses the Levenberg-Marquardt algorithm.

• lsqcurvefit can solve complex-valued problems directly. Note that constraints do not make sense for complex values, because complex numbers are not well-ordered; asking whether one complex value is greater or less than another complex value is nonsensical. For a complex problem with bound constraints, split the variables into real and imaginary parts. Do not use the 'interior-point' algorithm with complex data. See Fit a Model to Complex-Valued Data.

• The preconditioner computation used in the preconditioned conjugate gradient part of the trust-region-reflective method forms $J^{T}J$ (where $J$ is the Jacobian matrix) before computing the preconditioner. Therefore, a row of $J$ with many nonzeros, which results in a nearly dense product $J^{T}J$, can lead to a costly solution process for large problems.

• If components of x have no upper (or lower) bounds, lsqcurvefit prefers that the corresponding components of ub (or lb) be set to inf (or -inf for lower bounds) as opposed to an arbitrary but very large positive (or negative for lower bounds) number.

You can use the trust-region reflective algorithm in lsqnonlin, lsqcurvefit, and fsolve with small- to medium-scale problems without computing the Jacobian in fun or providing the Jacobian sparsity pattern. (This also applies to using fmincon or fminunc without computing the Hessian or supplying the Hessian sparsity pattern.) How small is small- to medium-scale? No absolute answer is available, as it depends on the amount of virtual memory in your computer system configuration.

Suppose your problem has m equations and n unknowns. If the command J = sparse(ones(m,n)) causes an Out of memory error on your machine, then this is certainly too large a problem. If it does not result in an error, the problem might still be too large. You can find out only by running it and seeing if MATLAB runs within the amount of virtual memory available on your system.

## Algorithms

The Levenberg-Marquardt and trust-region-reflective methods are based on the nonlinear least-squares algorithms also used in fsolve.

• The default trust-region-reflective algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region-Reflective Least Squares.

• The Levenberg-Marquardt method is described in references [4], [5], and [6]. See Levenberg-Marquardt Method.

The 'interior-point' algorithm uses the fmincon 'interior-point' algorithm with some modifications. For details, see Modified fmincon Algorithm for Constrained Least Squares.

## Alternative Functionality

### App

The Optimize Live Editor task provides a visual interface for lsqcurvefit.

## References

[1] Coleman, T.F. and Y. Li. “An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds.” SIAM Journal on Optimization, Vol. 6, 1996, pp. 418–445.

[2] Coleman, T.F. and Y. Li. “On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds.” Mathematical Programming, Vol. 67, Number 2, 1994, pp. 189–224.

[3] Dennis, J. E. Jr. “Nonlinear Least-Squares.” State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269–312.

[4] Levenberg, K. “A Method for the Solution of Certain Problems in Least-Squares.” Quarterly Applied Mathematics 2, 1944, pp. 164–168.

[5] Marquardt, D. “An Algorithm for Least-squares Estimation of Nonlinear Parameters.” SIAM Journal Applied Mathematics, Vol. 11, 1963, pp. 431–441.

[6] Moré, J. J. “The Levenberg-Marquardt Algorithm: Implementation and Theory.” Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, 1977, pp. 105–116.

[7] Moré, J. J., B. S. Garbow, and K. E. Hillstrom. User Guide for MINPACK 1. Argonne National Laboratory, Rept. ANL–80–74, 1980.

[8] Powell, M. J. D. “A Fortran Subroutine for Solving Systems of Nonlinear Algebraic Equations.” Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz, ed., Ch.7, 1970.

## Version History

Introduced before R2006a
