Find minimum of constrained nonlinear multivariable function
Finds the minimum of a problem specified by
$$\underset{x}{\mathrm{min}}\,f(x)\text{ such that }\begin{cases}c(x)\le 0\\ ceq(x)=0\\ A\cdot x\le b\\ Aeq\cdot x=beq\\ lb\le x\le ub,\end{cases}$$
b and beq are vectors, A and Aeq are matrices, c(x) and ceq(x) are functions that return vectors, and f(x) is a function that returns a scalar. f(x), c(x), and ceq(x) can be nonlinear functions.
x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.
x = fmincon(fun,x0,A,b)
x = fmincon(fun,x0,A,b,Aeq,beq)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
x = fmincon(problem)
[x,fval] = fmincon(...)
[x,fval,exitflag] = fmincon(...)
[x,fval,exitflag,output] = fmincon(...)
[x,fval,exitflag,output,lambda] = fmincon(...)
[x,fval,exitflag,output,lambda,grad] = fmincon(...)
[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...)
fmincon
attempts to find a constrained minimum
of a scalar function of several variables starting at an initial estimate.
This is generally referred to as constrained nonlinear
optimization or nonlinear programming.
Note: Passing Extra Parameters explains how to pass extra parameters to the objective function and nonlinear constraint functions, if necessary. 
x = fmincon(fun,x0,A,b)
starts
at x0
and attempts to find a minimizer x
of
the function described in fun
subject to the linear
inequalities A*x ≤ b
. x0
can
be a scalar, vector, or matrix.
x = fmincon(fun,x0,A,b,Aeq,beq)
minimizes fun
subject
to the linear equalities Aeq*x = beq
and A*x ≤ b
. If no inequalities
exist, set A = []
and b = []
.
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
defines
a set of lower and upper bounds on the design variables in x
,
so that the solution is always in the range lb
≤ x
≤ ub
.
If no equalities exist, set Aeq = []
and beq
= []
. If x(i) is unbounded below, set lb(i) = -Inf, and if x(i) is unbounded above, set ub(i) = Inf.
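As an illustrative sketch (the objective, bounds, and starting point here are invented for the example, not taken from this reference page), a call with only bound constraints passes empty matrices for the unused linear constraints:

```matlab
% Minimize a simple quadratic subject only to bounds (illustrative).
fun = @(x) (x(1)-1)^2 + (x(2)+2)^2;   % assumed objective
x0  = [5;5];                          % starting point
lb  = [0;0];                          % lower bounds
ub  = [10;10];                        % upper bounds
% No linear inequalities or equalities, so pass [] for A, b, Aeq, beq:
x = fmincon(fun,x0,[],[],[],[],lb,ub);
% The unconstrained minimizer is (1,-2); with lb = [0;0] the lower bound
% is active in the second component, so x is near [1;0].
```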
Note:
If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is [].
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
subjects
the minimization to the nonlinear inequalities c(x)
or
equalities ceq(x)
defined in nonlcon
. fmincon
optimizes
such that c(x) ≤ 0
and ceq(x) = 0
. If no bounds exist, set lb
= []
and/or ub = []
.
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
minimizes
with the optimization options specified in options
.
Use optimoptions
to set these
options. If there are no nonlinear inequality or equality constraints,
set nonlcon = []
.
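For instance (using an invented objective for illustration), to run the sqp algorithm with iterative display and no nonlinear constraints:

```matlab
% Select the algorithm and display level with optimoptions (illustrative).
options = optimoptions('fmincon','Algorithm','sqp','Display','iter');
fun = @(x) x(1)^2 + x(2)^2;           % assumed objective
x0  = [3;4];
lb  = [-1;-1];
ub  = [ 5; 5];
% Pass [] for nonlcon since there are no nonlinear constraints:
x = fmincon(fun,x0,[],[],[],[],lb,ub,[],options);
```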
x = fmincon(problem)
finds the minimum
for problem
, where problem
is
a structure described in Input Arguments.
Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
[x,fval] = fmincon(...)
returns
the value of the objective function fun
at the
solution x
.
[x,fval,exitflag] = fmincon(...)
returns
a value exitflag
that describes the exit condition
of fmincon
.
[x,fval,exitflag,output] = fmincon(...)
returns
a structure output
with information about the optimization.
[x,fval,exitflag,output,lambda] = fmincon(...)
returns
a structure lambda
whose fields contain the Lagrange
multipliers at the solution x
.
[x,fval,exitflag,output,lambda,grad]
= fmincon(...)
returns the value of the gradient of fun
at
the solution x
.
[x,fval,exitflag,output,lambda,grad,hessian]
= fmincon(...)
returns the value of the Hessian at the
solution x
. See fmincon Hessian.
Function Arguments describes the arguments passed to fmincon. Options provides the function-specific details for the options values. This section provides function-specific details for fun, nonlcon, and problem.
fun
The function to be minimized. fun is a function that accepts a vector x and returns a scalar f, the objective function evaluated at x. Specify fun as a function handle for a file:
x = fmincon(@myfun,x0,A,b)
where myfun is a MATLAB function such as
function f = myfun(x)
f = ...            % Compute function value at x
fun can also be a function handle for an anonymous function:
x = fmincon(@(x)norm(x)^2,x0,A,b);
If the gradient of fun can be computed and the GradObj option is 'on', as set by
options = optimoptions('fmincon','GradObj','on')
then fun must return the gradient vector g(x) in the second output argument. If the Hessian matrix can also be computed and the Hessian option is 'on', fun must return the Hessian value H(x), a symmetric matrix, in a third output argument.
 Linear
constraint matrices  
nonlcon
The function that computes the nonlinear inequality constraints c(x) ≤ 0 and the nonlinear equality constraints ceq(x) = 0. nonlcon accepts a vector x and returns the two vectors c and ceq. Specify nonlcon as a function handle:
x = fmincon(@myfun,x0,A,b,Aeq,beq,lb,ub,@mycon)
where mycon is a MATLAB function such as
function [c,ceq] = mycon(x)
c = ...     % Compute nonlinear inequalities at x.
ceq = ...   % Compute nonlinear equalities at x.
If the gradients of the constraints can also be computed and the GradConstr option is 'on', as set by
options = optimoptions('fmincon','GradConstr','on')
then nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). GC and GCeq can be sparse or dense. If GC or GCeq is large, with relatively few nonzero entries, save running time and memory in the interior-point algorithm by representing them as sparse matrices. For more information, see Nonlinear Constraints.
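As a concrete sketch (the constraint itself is invented for illustration), a nonlcon function that keeps x inside the unit disk, written in the required c(x) ≤ 0 form, looks like:

```matlab
function [c,ceq] = unitdisk(x)
% Nonlinear inequality x(1)^2 + x(2)^2 <= 1, expressed as c(x) <= 0:
c = x(1)^2 + x(2)^2 - 1;
% No nonlinear equality constraints:
ceq = [];
```

Pass it as the nonlcon argument, for example x = fmincon(@(x)-x(1)-x(2),[0;0],[],[],[],[],[],[],@unitdisk);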
 
problem
objective  Objective function
x0  Initial point for x
Aineq  Matrix for linear inequality constraints
bineq  Vector for linear inequality constraints
Aeq  Matrix for linear equality constraints
beq  Vector for linear equality constraints
lb  Vector of lower bounds
ub  Vector of upper bounds
nonlcon  Nonlinear constraint function
solver  'fmincon'
options  Options created with optimoptions
Function Arguments describes arguments returned by fmincon. This section provides function-specific details for exitflag, lambda, and output:
exitflag
Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding reasons.
All Algorithms:
1  First-order optimality measure was less than TolFun, and maximum constraint violation was less than TolCon.
0  Number of iterations exceeded MaxIter or number of function evaluations exceeded MaxFunEvals.
-1  Stopped by an output function or plot function.
-2  No feasible point was found.
All algorithms except active-set:
2  Change in x was less than TolX and maximum constraint violation was less than TolCon.
trust-region-reflective algorithm only:
3  Change in the objective function value was less than TolFun and maximum constraint violation was less than TolCon.
active-set algorithm only:
4  Magnitude of the search direction was less than 2*TolX and maximum constraint violation was less than TolCon.
5  Magnitude of directional derivative in search direction was less than 2*TolFun and maximum constraint violation was less than TolCon.
interior-point and sqp algorithms:
-3  Objective function at current iteration went below ObjectiveLimit and maximum constraint violation was less than TolCon.
grad  Gradient at x
hessian  Hessian at x
lambda
Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields of the structure are:
lower  Lower bounds  
upper  Upper bounds  
ineqlin  Linear inequalities  
eqlin  Linear equalities  
ineqnonlin  Nonlinear inequalities  
eqnonlin  Nonlinear equalities  
output
Structure containing information about the optimization. The fields of the structure are:
iterations  Number of iterations taken  
funcCount  Number of function evaluations  
lssteplength  Size of line search step relative to search direction (active-set algorithm only)
constrviolation  Maximum of constraint functions
stepsize  Length of last displacement in x
algorithm  Optimization algorithm used
cgiterations  Total number of PCG iterations (trust-region-reflective and interior-point algorithms)
firstorderopt  Measure of firstorder optimality  
message  Exit message 
fmincon uses a Hessian as an optional input. This Hessian is the matrix of second derivatives of the Lagrangian (see Equation 3-1), namely,
$${\nabla}_{xx}^{2}L(x,\lambda )={\nabla}^{2}f(x)+\sum_i {\lambda}_{i}{\nabla}^{2}{c}_{i}(x)+\sum_i {\lambda}_{i}{\nabla}^{2}{ceq}_{i}(x).$$  (14-1)
The various fmincon
algorithms handle input
Hessians differently:
The active-set and sqp algorithms do not accept a user-supplied Hessian. They compute a quasi-Newton approximation to the Hessian of the Lagrangian.
The trust-region-reflective algorithm can accept a user-supplied Hessian as the final output of the objective function. Since this algorithm has only bounds or linear constraints, the Hessian of the Lagrangian is the same as the Hessian of the objective function. See Writing Scalar Objective Functions for details on how to pass the Hessian to fmincon.
Indicate that you are supplying a Hessian by
options = optimoptions('fmincon','Algorithm','trust-region-reflective','Hessian','user-supplied');
The interior-point algorithm can accept a user-supplied Hessian as a separately defined function; it is not computed in the objective function. The syntax is
hessian = hessianfcn(x, lambda)
hessian is an n-by-n matrix, sparse or dense, where n is the number of variables. If hessian is large and has relatively few nonzero entries, save running time and memory by representing hessian as a sparse matrix. lambda is a structure with the Lagrange multiplier vectors associated with the nonlinear constraints:
lambda.ineqnonlin
lambda.eqnonlin
fmincon computes the structure lambda. hessianfcn must calculate the sums in Equation 14-1. Indicate that you are supplying a Hessian by
options = optimoptions('fmincon','Algorithm','interior-point',...
    'Hessian','user-supplied','HessFcn',@hessianfcn);
For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.
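A minimal sketch of such a hessianfcn, for an assumed objective f(x) = x1^2 + 2*x2^2 and a single assumed nonlinear inequality c(x) = x1^2 + x2^2 - 1 <= 0 (no nonlinear equalities), could look like:

```matlab
function H = hessianfcn(x, lambda)
% Hessian of the assumed objective f(x) = x(1)^2 + 2*x(2)^2:
Hobj = [2 0; 0 4];
% Hessian of the assumed inequality constraint c(x) = x(1)^2 + x(2)^2 - 1:
Hc = [2 0; 0 2];
% Hessian of the Lagrangian, per Equation 14-1. There are no nonlinear
% equalities in this example, so the ceq sum contributes nothing.
H = Hobj + lambda.ineqnonlin(1)*Hc;
```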
The interior-point algorithm has several more options for Hessians; see Choose Input Hessian for interior-point fmincon:
options = optimoptions('fmincon','Hessian','bfgs');
fmincon calculates the Hessian by a dense quasi-Newton approximation. This is the default.
options = optimoptions('fmincon','Hessian','lbfgs');
fmincon calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The default memory, 10 iterations, is used.
options = optimoptions('fmincon','Hessian',{'lbfgs',positive integer});
fmincon calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The positive integer specifies how many past iterations should be remembered.
options = optimoptions('fmincon','Hessian','fin-diff-grads',...
    'SubproblemAlgorithm','cg','GradObj','on',...
    'GradConstr','on');
fmincon calculates a Hessian-times-vector product by finite differences of the gradient(s). You must supply the gradient of the objective function, and also gradients of nonlinear constraints if they exist.
The interior-point and trust-region-reflective algorithms allow you to supply a Hessian multiply function. This function gives the result of a Hessian-times-vector product, without computing the Hessian directly. This can save memory. The syntax for the two algorithms differs:
For the interior-point algorithm, the syntax is
W = HessMultFcn(x,lambda,v);
The result W should be the product H*v, where H is the Hessian of the Lagrangian at x (see Equation 14-1), lambda is the Lagrange multiplier (computed by fmincon), and v is a vector of size n-by-1.
Set options as follows:
options = optimoptions('fmincon','Algorithm','interior-point','Hessian','user-supplied',...
    'SubproblemAlgorithm','cg','HessMult',@HessMultFcn);
Supply the function HessMultFcn, which returns an n-by-1 vector, where n is the number of dimensions of x. The HessMult option enables you to pass the result of multiplying the Hessian by a vector without calculating the Hessian.
The trust-region-reflective algorithm does not involve lambda:
W = HessMultFcn(H,v);
The result W = H*v. fmincon passes H as the value returned in the third output of the objective function (see Writing Scalar Objective Functions). fmincon also passes v, a vector or matrix with n rows. The number of columns in v can vary, so write HessMultFcn to accept an arbitrary number of columns. H does not have to be the Hessian; rather, it can be anything that enables you to calculate W = H*v.
Set options as follows:
options = optimoptions('fmincon','Algorithm','trust-region-reflective',...
    'Hessian','user-supplied','HessMult',@HessMultFcn);
For an example using a Hessian multiply function with the trust-region-reflective algorithm, see Minimization with Dense Structured Hessian, Linear Equalities.
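As a sketch of the trust-region-reflective form (the structure of H here is hypothetical), suppose the objective's third output is a struct encoding a Hessian B + U*U' that you never want to form explicitly:

```matlab
function W = HessMultFcn(Hinfo, v)
% Hinfo is whatever the objective returned as its third output; here we
% assume a struct with a sparse part Hinfo.B and a low-rank factor
% Hinfo.U, so that H = Hinfo.B + Hinfo.U*Hinfo.U'.
% Compute W = H*v without forming H. Works for v with any number of
% columns, as fmincon requires.
W = Hinfo.B*v + Hinfo.U*(Hinfo.U'*v);
```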
Optimization options used by fmincon
. Some
options apply to all algorithms, and others are relevant for particular
algorithms. Use optimoptions
to
set or change the values in options
. See Optimization Options Reference for detailed information.
All four algorithms use these options:
Algorithm  Choose the optimization algorithm. For information on choosing the algorithm, see Choosing the Algorithm.
DerivativeCheck  Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. The choices are
Diagnostics  Display diagnostic information
about the function to be minimized or solved. The choices are 
DiffMaxChange  Maximum change in variables for finite-difference gradients (a positive scalar). The default is
DiffMinChange  Minimum change in variables for finite-difference gradients (a positive scalar). The default is
Display  Level of display:

FinDiffRelStep  Scalar or vector step size factor. When you set
and central finite differences are
Scalar 
FinDiffType  Finite differences, used to estimate
gradients, are either

FunValCheck  Check whether objective function and constraint values are valid.
GradConstr  Gradient for nonlinear constraint
functions defined by the user. When set to 
GradObj  Gradient for the objective function
defined by the user. See the preceding description of 
MaxFunEvals  Maximum number of function evaluations
allowed, a positive integer. The default value for all algorithms
except 
MaxIter  Maximum number of iterations allowed,
a positive integer. The default value for all algorithms except 
OutputFcn  Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none (
PlotFcns  Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none (
For information on writing a custom plot function, see Plot Functions.
TolCon  Tolerance on the constraint violation,
a positive scalar. The default is 
TolFun  Termination tolerance on the function
value, a positive scalar. The default is 
TolX  Termination tolerance on x, a positive scalar. The default is
TypicalX  Typical x values. The
UseParallel  When 
The 'trust-region-reflective' algorithm uses these options:
Hessian  If  
HessMult  Function handle for Hessian multiply function. For large-scale structured problems, this function computes the Hessian matrix product W = hmfun(Hinfo,Y). The first argument must be the same as the third argument returned by the objective function:
[f,g,Hinfo] = fun(x)
See Minimization with Dense Structured Hessian, Linear Equalities for an example.  
HessPattern  Sparsity pattern of the Hessian
for finite differencing. Set Use In
the worst case, when the structure is unknown, do not set  
MaxPCGIter  Maximum number of PCG (preconditioned
conjugate gradient) iterations, a positive scalar. The default is  
PrecondBandWidth  Upper bandwidth of preconditioner
for PCG, a nonnegative integer. By default, diagonal preconditioning
is used (upper bandwidth of 0). For some problems, increasing the
bandwidth reduces the number of PCG iterations. Setting  
TolPCG  Termination tolerance on the PCG
iteration, a positive scalar. The default is 
The 'active-set' algorithm uses these options:
MaxSQPIter  Maximum number of SQP iterations
allowed, a positive integer. The default is 
RelLineSrchBnd  Relative bound (a real nonnegative scalar value) on the line search step length such that the total displacement in x satisfies |Δx(i)| ≤ relLineSrchBnd·max(|x(i)|,|typicalx(i)|). This option provides control over the magnitude of the displacements in x for cases in which the solver takes steps that are considered too large. The default is no bounds ( 
RelLineSrchBndDuration  Number of iterations for which
the bound specified in 
TolConSQP  Termination tolerance on inner iteration SQP constraint violation, a positive scalar. The default is 
The 'interior-point' algorithm uses these options:
AlwaysHonorConstraints  The default 
HessFcn  Function handle to a user-supplied Hessian (see Hessian). This is used when the 
Hessian  Chooses how

HessMult  Handle to a user-supplied function that gives a Hessian-times-vector product (see Hessian). This is used when the 
InitBarrierParam  Initial barrier value, a positive
scalar. Sometimes it might help to try a value above the default 
InitTrustRegionRadius  Initial radius of the trust region, a positive scalar. On badly scaled problems it might help to choose a value smaller than the default $$\sqrt{n}$$, where n is the number of variables. 
MaxProjCGIter  A tolerance (stopping criterion)
for the number of projected conjugate gradient iterations; this is
an inner iteration, not the number of iterations of the algorithm.
This positive integer has a default value of 
ObjectiveLimit  A tolerance (stopping criterion)
that is a scalar. If the objective function value goes below 
ScaleProblem 

SubproblemAlgorithm  Determines how the iteration step
is calculated. The default, 
TolProjCG  A relative tolerance (stopping
criterion) for projected conjugate gradient algorithm; this is for
an inner iteration, not the algorithm iteration. This positive scalar
has a default of 
TolProjCGAbs  Absolute tolerance (stopping criterion)
for projected conjugate gradient algorithm; this is for an inner iteration,
not the algorithm iteration. This positive scalar has a default of 
The 'sqp'
algorithm uses these options:
ObjectiveLimit  A tolerance (stopping criterion)
that is a scalar. If the objective function value goes below 
ScaleProblem 

Find values of x that minimize f(x) = –x_{1}x_{2}x_{3},
starting at the point x = [10;10;10]
,
subject to the constraints:
0 ≤ x_{1} + 2x_{2} + 2x_{3} ≤ 72.
Write a file that returns a scalar value f
of
the objective function evaluated at x
:
function f = myfun(x)
f = -x(1) * x(2) * x(3);
Rewrite the constraints so that both are in less-than-or-equal-to form:
–x_{1} – 2x_{2} – 2x_{3} ≤ 0
x_{1} + 2x_{2} + 2x_{3} ≤ 72
Since both constraints are linear, formulate them as the matrix inequality A·x ≤ b, where
A = [-1 -2 -2; ...
      1  2  2];
b = [0;72];
Supply a starting point and invoke an optimization routine:
x0 = [10;10;10];    % Starting guess at the solution
[x,fval] = fmincon(@myfun,x0,A,b);
After fmincon
stops, the solution
is
x

x =
   24.0000
   12.0000
   12.0000
where the function value is
fval

fval =
  -3.4560e+03
and the linear inequality constraints evaluate to less than or equal to 0:
A*x-b

ans =
  -72.0000
    0.0000
To use the trust-region-reflective algorithm, you must:
Supply the gradient of the objective function in fun
.
Set GradObj
to 'on'
in options
.
Specify the feasible region using one, but not both, of the following types of constraints:
Upper and lower bounds constraints
Linear equality constraints, in which the equality
constraint matrix Aeq
cannot have more rows than
columns
You cannot use inequality constraints with the trust-region-reflective algorithm. If the preceding conditions are not met, fmincon reverts to the active-set algorithm.
fmincon returns a warning if you do not provide a gradient and the Algorithm option is 'trust-region-reflective'. fmincon permits an approximate gradient to be supplied, but this option is not recommended; the numerical behavior of most optimization methods is considerably more robust when the true gradient is used.
The trust-region-reflective method in fmincon is in general most effective when the matrix of second derivatives, i.e., the Hessian matrix H(x), is also computed. However, evaluation of the true Hessian matrix is not required. For example, if you can supply the Hessian sparsity structure (using the HessPattern option in options), fmincon computes a sparse finite-difference approximation to H(x).
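A sketch of an objective file that also returns its gradient, as the trust-region-reflective algorithm requires (the quadratic itself is invented for illustration):

```matlab
function [f,g] = myobj(x)
% Assumed objective and its analytic gradient, for use with
% optimoptions('fmincon','Algorithm','trust-region-reflective','GradObj','on')
f = x(1)^2 + 3*x(2)^2;
if nargout > 1              % gradient requested by the solver
    g = [2*x(1); 6*x(2)];
end
```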
If x0
is not strictly feasible, fmincon
chooses
a new strictly feasible (centered) starting point.
If components of x have no upper (or lower) bounds, fmincon prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lb) as opposed to an arbitrary but very large positive (or negative in the case of lower bounds) number.
Take note of these characteristics of linearly constrained minimization:
A dense (or fairly dense) column of matrix Aeq
can
result in considerable fill and computational cost.
fmincon
removes (numerically) linearly
dependent rows in Aeq
; however, this process involves
repeated matrix factorizations and therefore can be costly if there
are many dependencies.
Each iteration involves a sparse least-squares solution with matrix
$$\overline{Aeq}=Ae{q}^{T}{R}^{-T},$$
where R^{T} is the Cholesky factor of the preconditioner. Therefore, there is a potential conflict between choosing an effective preconditioner and minimizing fill in $$\overline{Aeq}$$.
If equality constraints are present and dependent equalities
are detected and removed in the quadratic subproblem, 'dependent'
appears
under the Procedures
heading (when you ask for
output by setting the Display
option to 'iter'
).
The dependent equalities are only removed when the equalities are
consistent. If the system of equalities is not consistent, the subproblem
is infeasible and 'infeasible'
appears under the Procedures
heading.
fmincon
is a gradientbased method that
is designed to work on problems where the objective and constraint
functions are both continuous and have continuous first derivatives.
When
the problem is infeasible, fmincon
attempts to
minimize the maximum constraint value.
The 'trust-region-reflective' algorithm does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), fmincon gives this error:
Equal upper and lower bounds not permitted in trust-region-reflective algorithm. Use either interior-point or SQP algorithms instead.
There are two different syntaxes for passing a Hessian, and there are two different syntaxes for passing a HessMult function; one for trust-region-reflective, and another for interior-point.
For trust-region-reflective, the Hessian of the Lagrangian is the same as the Hessian of the objective function. You pass that Hessian as the third output of the objective function.
For interior-point, the Hessian of the Lagrangian involves the Lagrange multipliers and the Hessians of the nonlinear constraint functions. You pass the Hessian as a separate function that takes into account both the position x and the Lagrange multiplier structure lambda.
Trust-Region-Reflective Coverage and Requirements
Additional Information Needed  For Large Problems 

Must provide gradient for 

[1] Byrd, R.H., J. C. Gilbert, and J. Nocedal, "A Trust Region Method Based on Interior Point Techniques for Nonlinear Programming," Mathematical Programming, Vol 89, No. 1, pp. 149–185, 2000.
[2] Byrd, R.H., Mary E. Hribar, and Jorge Nocedal, "An Interior Point Algorithm for Large-Scale Nonlinear Programming," SIAM Journal on Optimization, Vol 9, No. 4, pp. 877–900, 1999.
[3] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418–445, 1996.
[4] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for LargeScale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189–224, 1994.
[5] Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, London, Academic Press, 1981.
[6] Han, S.P., "A Globally Convergent Method for Nonlinear Programming," Vol. 22, Journal of Optimization Theory and Applications, p. 297, 1977.
[7] Powell, M.J.D., "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Numerical Analysis, ed. G.A. Watson, Lecture Notes in Mathematics, Springer Verlag, Vol. 630, 1978.
[8] Powell, M.J.D., "The Convergence of Variable Metric Methods For Nonlinearly Constrained Optimization Calculations," Nonlinear Programming 3 (O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, eds.), Academic Press, 1978.
[9] Waltz, R. A., J. L. Morales, J. Nocedal, and D. Orban, "An interior algorithm for nonlinear optimization that combines line search and trust region steps," Mathematical Programming, Vol 107, No. 3, pp. 391–408, 2006.
fminbnd | fminsearch | fminunc | optimoptions | optimtool