fcn2optimexpr

Convert function to optimization expression

Description

[out1,out2,...,outN] = fcn2optimexpr(fcn,in1,in2,...,inK) converts the function fcn(in1,in2,...,inK) to an optimization expression with N outputs.

[out1,out2,...,outN] = fcn2optimexpr(fcn,in1,in2,...,inK,Name,Value) specifies additional options using one or more name-value pair arguments. For example, you can save a function evaluation by passing OutputSize.
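
For instance, a minimal sketch of both syntaxes, assuming a hypothetical scalar-valued function myfun of two optimization variables (myfun, x, and y are placeholders):

x = optimvar('x');
y = optimvar('y');
% Basic conversion of a function handle to an optimization expression
expr = fcn2optimexpr(@myfun,x,y);
% Same conversion, declaring the output a scalar so fcn2optimexpr
% can skip the evaluation it uses to determine the output size
expr = fcn2optimexpr(@myfun,x,y,'OutputSize',[1,1]);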

Examples

To use a MATLAB® function in the problem-based approach when it is not composed of supported functions, first convert it to an optimization expression. See Supported Operations for Optimization Variables and Expressions and Convert Nonlinear Function to Optimization Expression.

To use the objective function gamma (the mathematical function Γ(x), an extension of the factorial function), create an optimization variable x and convert the function handle @gamma to an optimization expression in x.

x = optimvar('x');
obj = fcn2optimexpr(@gamma,x);
prob = optimproblem('Objective',obj);
show(prob)
  OptimizationProblem : 

	Solve for:
       x

	minimize :
       gamma(x)

To solve the resulting problem, give an initial point structure and call solve.

x0.x = 1/2;
sol =  solve(prob,x0)
Solving problem using fminunc.

Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.

<stopping criteria details>
sol = struct with fields:
    x: 1.4616

For more complex functions, convert a function file. The function file gammabrock.m computes an objective of two optimization variables.

type gammabrock
function f = gammabrock(x,y)
f = (10*(y - gamma(x)))^2 + (1 - x)^2;

Include this objective in a problem.

x = optimvar('x','LowerBound',0);
y = optimvar('y');
obj = fcn2optimexpr(@gammabrock,x,y);
prob = optimproblem('Objective',obj);
show(prob)
  OptimizationProblem : 

	Solve for:
       x, y

	minimize :
       gammabrock(x, y)


	variable bounds:
       0 <= x

The gammabrock function is a sum of squares. You get a more efficient problem formulation by expressing the function as an explicit sum of squares of optimization expressions.

f = fcn2optimexpr(@(x,y)y - gamma(x),x,y);
obj2 = (10*f)^2 + (1-x)^2;
prob2 = optimproblem('Objective',obj2);

To see the difference in efficiency, solve prob and prob2 and examine the difference in the number of iterations.

x0.x = 1/2;
x0.y = 1/2;
[sol,fval,~,output] = solve(prob,x0);
Solving problem using fmincon.

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

<stopping criteria details>
[sol2,fval2,~,output2] = solve(prob2,x0);
Solving problem using lsqnonlin.

Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.

<stopping criteria details>
fprintf('prob took %d iterations, but prob2 took %d iterations\n',output.iterations,output2.iterations)
prob took 21 iterations, but prob2 took 2 iterations

If your function has several outputs, you can use them as elements of the objective function. In this case, u is a 2-by-2 variable, v is a 2-by-1 variable, and expfn3 has three outputs.

type expfn3
function [f,g,mineval] = expfn3(u,v)
mineval = min(eig(u));
f = v'*u*v;
f = -exp(-f);
t = u*v;
g = t'*t + sum(t) - 3;

Create appropriately sized optimization variables, and create an objective function from the first two outputs.

u = optimvar('u',2,2);
v = optimvar('v',2);
[f,g,mineval] = fcn2optimexpr(@expfn3,u,v);
prob = optimproblem;
prob.Objective = f*g/(1 + f^2);
show(prob)
  OptimizationProblem : 

	Solve for:
       u, v

	minimize :
       ((arg2 .* arg3) ./ (1 + arg1.^2))

       where:

           [arg1,~,~] = expfn3(u, v);
           [arg2,~,~] = expfn3(u, v);
           [~,arg3,~] = expfn3(u, v);

You can use the mineval output in a subsequent constraint expression.
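
For example, a sketch of one possible constraint on mineval (the bound –1/2 is arbitrary, chosen only for illustration):

% Require the smallest eigenvalue of u to be at least -1/2
prob.Constraints.eigcon = mineval >= -1/2;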

In problem-based optimization, constraints are two optimization expressions with a comparison operator (==, <=, or >=) between them. You can use fcn2optimexpr to create one or both optimization expressions. See Convert Nonlinear Function to Optimization Expression.

Create the nonlinear constraint that gammafn2 is less than or equal to –1/2. This function of two variables is in the gammafn2.m file.

type gammafn2
function f = gammafn2(x,y)
f = -gamma(x)*(y/(1+y^2));

Create optimization variables, convert the function file to an optimization expression, and then express the constraint as confn.

x = optimvar('x','LowerBound',0);
y = optimvar('y','LowerBound',0);
expr1 = fcn2optimexpr(@gammafn2,x,y);
confn = expr1 <= -1/2;
show(confn)
  gammafn2(x, y) <= -0.5

Create another constraint that gammafn2 is greater than or equal to x + y.

confn2 = expr1 >= x + y;

Create an optimization problem and place the constraints in the problem.

prob = optimproblem;
prob.Constraints.confn = confn;
prob.Constraints.confn2 = confn2;
show(prob)
  OptimizationProblem : 

	Solve for:
       x, y

	minimize :

	subject to confn:
       gammafn2(x, y) <= -0.5

	subject to confn2:
       gammafn2(x, y) >= (x + y)

	variable bounds:
       0 <= x

       0 <= y

If your problem involves a common, time-consuming function to compute the objective and nonlinear constraint, you can save time by using the ReuseEvaluation name-value argument. The rosenbrocknorm function computes both the Rosenbrock objective function and the norm of the argument for use in the constraint ||x||^2 <= 4.

type rosenbrocknorm
function [f,c] = rosenbrocknorm(x)
pause(1) % Simulates time-consuming function
c = dot(x,x);
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;

Create a 2-D optimization variable x. Then convert rosenbrocknorm to an optimization expression by using fcn2optimexpr and set the ReuseEvaluation name-value argument to true. To ensure that fcn2optimexpr keeps the pause statement, set the Analysis name-value argument to 'off'.

x = optimvar('x',2);
[f,c] = fcn2optimexpr(@rosenbrocknorm,x,...
    'ReuseEvaluation',true,'Analysis','off');

Create objective and constraint expressions from the returned expressions. Include the objective and constraint expressions in an optimization problem. Review the problem using show.

prob = optimproblem('Objective',f);
prob.Constraints.cineq = c <= 4;
show(prob)
  OptimizationProblem : 

	Solve for:
       x

	minimize :
       [argout,~] = rosenbrocknorm(x)


	subject to cineq:
       arg_LHS <= 4

       where:

         [~,arg_LHS] = rosenbrocknorm(x);
     

Solve the problem starting from the initial point x0.x = [-1;1], timing the result.

x0.x = [-1;1];
tic
[sol,fval,exitflag,output] = solve(prob,x0)
Solving problem using fmincon.

Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in 
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.

<stopping criteria details>
sol = struct with fields:
    x: [2×1 double]

fval = 4.5793e-11
exitflag = 
    OptimalSolution

output = struct with fields:
              iterations: 44
               funcCount: 164
         constrviolation: 0
                stepsize: 4.3124e-08
               algorithm: 'interior-point'
           firstorderopt: 5.1691e-07
            cgiterations: 10
                 message: 'Local minimum found that satisfies the constraints.↵↵Optimization completed because the objective function is non-decreasing in ↵feasible directions, to within the value of the optimality tolerance,↵and constraints are satisfied to within the value of the constraint tolerance.↵↵<stopping criteria details>↵↵Optimization completed: The relative first-order optimality measure, 5.169074e-07,↵is less than options.OptimalityTolerance = 1.000000e-06, and the relative maximum constraint↵violation, 0.000000e+00, is less than options.ConstraintTolerance = 1.000000e-06.'
            bestfeasible: [1×1 struct]
     objectivederivative: "finite-differences"
    constraintderivative: "finite-differences"
                  solver: 'fmincon'

toc
Elapsed time is 165.623157 seconds.

Because each call to rosenbrocknorm pauses for 1 second, the elapsed time in seconds is nearly the same as the number of function evaluations (funcCount). This result indicates that the solver reused function values and did not waste time reevaluating the same point twice.

For a more extensive example, see Objective and Constraints Having a Common Function in Serial or Parallel, Problem-Based. For more information on using fcn2optimexpr, see Convert Nonlinear Function to Optimization Expression.

Input Arguments

fcn — Function to convert, specified as a function handle.

Example: @sin specifies the sine function.

Data Types: function_handle

in — Input argument, specified as a MATLAB variable. The input can have any data type and any size. You can include any problem variables or constant data in the input argument in; see Pass Extra Parameters in Problem-Based Approach and the sketch after the data-type list below.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical | char | string | struct | table | cell | function_handle | categorical | datetime | duration | calendarDuration | fi
Complex Number Support: Yes
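
For instance, a minimal sketch of passing constant data together with an optimization variable (the anonymous function and the value of a are placeholders):

a = 4;                 % constant data passed as an extra parameter
x = optimvar('x',2);
% Only x is an optimization variable; a is fixed data
expr = fcn2optimexpr(@(x,a)sum((x - a).^2),x,a);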

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: [out1,out2] = fcn2optimexpr(@fun,x,y,'OutputSize',[1,1],'ReuseEvaluation',true) specifies that out1 and out2 are scalars that a solver will reuse between objective and constraint functions without recalculation.

Analysis — Indication to analyze the function fcn, specified as "on" or "off". The software follows the steps in fcn2optimexpr Algorithm Description to attempt to use the fastest and most appropriate solver and algorithms.

When Analysis is "on", the software first tries static analysis, described in Static Analysis of Optimization Expressions.

Analyzing the function determines whether fcn consists entirely of supported operations (see Supported Operations for Optimization Variables and Expressions). If so, the software can use automatic differentiation.

To prevent fcn2optimexpr from analyzing fcn, and instead treat fcn as a black box without automatic differentiation, specify "off". In this case, solve uses only a nonlinear solver such as fmincon or ga, not a linear or quadratic solver such as linprog or quadprog.

For more information about the effects of Analysis, see Limitations.

Example: [out1,out2] = fcn2optimexpr(@fun,x,"Analysis","off")

Data Types: char | string
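
One way to observe the effect of Analysis is to inspect the derivative fields of the output structure that solve returns, as in this sketch (prob and x0 stand for a problem and initial point such as those above; field values depend on the problem, and the rosenbrocknorm example shows "finite-differences" because Analysis was "off"):

[sol,fval,exitflag,output] = solve(prob,x0);
% Reports how solve obtained objective derivatives, for example
% "finite-differences" when fcn is treated as a black box
output.objectivederivative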

Display — Report function analysis details, specified as "off" (do not report) or "on" (report). If Analysis is "off", there is nothing to report.

Example: [out1,out2] = fcn2optimexpr(@fun,x,"Display","on")

Data Types: char | string

OutputSize — Size of the output expressions, specified as:

  • An integer vector — If the function has one output out1, OutputSize specifies the size of out1. If the function has multiple outputs out1,…,outN, OutputSize specifies that all outputs have the same size.

  • A cell array of integer vectors — The size of output outj is the jth element of OutputSize.

Note

A scalar has size [1,1].

If you do not specify the 'OutputSize' name-value pair argument, then fcn2optimexpr passes data to fcn in order to determine the size of the outputs (see Algorithms). By specifying 'OutputSize', you enable fcn2optimexpr to skip this step, which saves time. Also, if you do not specify 'OutputSize' and the evaluation of fcn fails for any reason, then fcn2optimexpr fails as well.

Example: [out1,out2,out3] = fcn2optimexpr(@fun,x,'OutputSize',[1,1]) specifies that the three outputs [out1,out2,out3] are scalars.

Example: [out1,out2] = fcn2optimexpr(@fun,x,'OutputSize',{[4,4],[3,5]}) specifies that out1 has size 4-by-4 and out2 has size 3-by-5.

Data Types: double | cell

ReuseEvaluation — Indicator to reuse values, specified as false (do not reuse) or true (reuse).

Note

ReuseEvaluation may not have an effect when Analysis="on".

ReuseEvaluation is not supported in a thread-based parallel pool.

ReuseEvaluation can make your problem run faster when, for example, the objective and some nonlinear constraints rely on a common calculation. In this case, the solver stores the value for reuse wherever needed and avoids recalculating the value.

Reusable values involve some overhead, so it is best to enable them only for expressions that share a value.

Example: [out1,out2,out3] = fcn2optimexpr(@fun,x,"ReuseEvaluation",true,"Analysis","off") allows out1, out2, and out3 to be used in multiple computations, with the outputs being calculated only once per evaluation point.

Data Types: logical

Output Arguments

out — Output argument, returned as an OptimizationExpression. The size of the expression depends on the input function.
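
For example, a returned expression behaves like any other OptimizationExpression. This sketch evaluates the converted gamma expression from the first example at a point:

x = optimvar('x');
expr = fcn2optimexpr(@gamma,x);
% Evaluate the expression at the point x = 1/2; gamma(1/2) = sqrt(pi)
val = evaluate(expr,struct('x',1/2))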

Limitations

Analysis Can Ignore Noncomputational Functions

  • The Analysis algorithm might not include noncomputational functions. This aspect of the algorithm can result in the following:

    • pause statements are ignored.

    • A global variable that does not affect the results can be ignored. For example, if you use a global variable to count how many times the function runs, then you might obtain a misleading count.

    • If the function contains a call to rand or rng, the function might execute the first call only, and future calls do not set the random number stream.

    • A plot call might not update a figure at all iterations.

    • Saving data to a mat file or text file might not occur at every iteration.

  • To ensure that noncomputational functions operate as you expect, set the Analysis name-value argument to "off".

For more information, see Limitations of Static Analysis.
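
As an illustration of the second point above, the following sketch uses a hypothetical function file countEvals.m that maintains a global evaluation counter:

function f = countEvals(x)
global NEVALS               % noncomputational bookkeeping
NEVALS = NEVALS + 1;
f = sum(x.^2);

Setting Analysis to "off" makes fcn2optimexpr treat countEvals as a black box, so the counter updates on every evaluation:

global NEVALS
NEVALS = 0;
x = optimvar('x',2);
obj = fcn2optimexpr(@countEvals,x,"Analysis","off");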

Algorithms

Version History

Introduced in R2019a