fmincon: optimize till nonlinear condition is false
I am trying to find the maximum in a nonlinear function that depends on a nonlinear condition.
In the plot underneath, the function is plotted at the top and the corresponding value of the condition is plotted at the bottom. The gray area in the lower subplot represents the region where the condition is satisfied. The blue point represents the starting point x0 of the optimization, the green point the determined optimum, where the condition is also satisfied.

Actually, however, the optimum should lie within the magenta-colored lines, because the condition is not satisfied over a short interval, at about x = 0.6, between the starting point and the determined optimum.
But the optimization apparently just skips this interval.
Is there an option for fmincon or other optimizers to define this, or maybe a way to ask the optimizer to continue only until the nonlinear condition becomes false?
I have already tried adjusting StepTolerance and FiniteDifferenceStepSize, and several different optimizers, but without success.
Adjusting the constraints shall not be the solution for this.
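For context, here is a minimal sketch of the kind of setup described above (the objective F, the condition condfnc, and the starting point are hypothetical stand-ins, not my actual functions). Note that fmincon only requires the nonlinear constraint to hold at the point it returns, not along the path from x0:

```matlab
% Hypothetical 1-D example: maximize F subject to condfnc(x) <= 0
F = @(x) -(x - 0.8).^2 + 1;            % placeholder objective
condfnc = @(x) cos(5*x);               % placeholder condition (feasible where <= 0)
nonlcon = @(x) deal(condfnc(x), []);   % [c, ceq] = nonlcon(x), no equality constraints
x0 = 0.3;
opts = optimoptions('fmincon', 'Display', 'off');
% fmincon minimizes, so negate F to maximize it
xopt = fmincon(@(x) -F(x), x0, [],[],[],[], [],[], nonlcon, opts);
```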
Edit:
So after the discussion, the easiest approach was to implement a steepest descent procedure that calculates the gradient "sk" at the current point and takes the next step subject to a maximum step size.
For each candidate point "xplus = x + sk" I check whether one of several stopping criteria (including my condition function value) is reached.
This works for me so far, and it reaches my optimum for functions with 1 to 8 input dimensions and a 1-dimensional output.
I know there would be a lot more to consider.
For everyone who is interested, I attached my function steepestDescent.m and I appreciate any suggestions for improvements.
-> I know this is computationally intensive, but in my case I do not need any smart computational savings.
function [x, k, stoptype] = steepestDescent(F, x0, maxStepsize, lb, ub, condfnc)
% Steepest descent with a capped step size. Stops when condfnc(x) > 0,
% when the search direction turns back toward x0, on stagnation, or
% after a maximum number of steps.
x = x0;
k = 1;
while true
    % calculate Jacobian/gradient at the current point
    J = jacobi(F, x);
    sk = -J';
    % clamp each component of the step to [-maxStepsize, maxStepsize]
    sk(sk < -maxStepsize) = -maxStepsize;
    sk(sk >  maxStepsize) =  maxStepsize;
    xplus = x + sk;
    % project the candidate point back onto the bounds
    xplus(xplus < lb) = lb(xplus < lb);
    xplus(xplus > ub) = ub(xplus > ub);
    if condfnc(xplus) > 0
        disp('abort: condition > 0')
        disp(['steps: ' num2str(k)])
        stoptype = 'condition > 0';
        return;
    end
    % if k > 1 && ~isequal(sign(prevJ), sign(J))
    if k > 1 && norm(x0 - xplus) < norm(x0 - x)
        disp('search direction turned');
        disp(['steps: ' num2str(k)])
        stoptype = 'search direction turned';
        return;
    end
    if norm(x - xplus) < 1e-8
        disp('procedure stagnates');
        disp(['steps: ' num2str(k)])
        stoptype = 'procedure stagnates';
        return;
    end
    if k >= 100
        disp('maximum steps reached');
        disp(['steps: ' num2str(k)])
        stoptype = 'maximum steps reached';
        return;
    end
    x = xplus;
    k = k + 1;
end
    function J = jacobi(f, x)
    % central-difference Jacobian with a step size scaled to x
    e_mach = 1e-16;             % machine epsilon (order of magnitude)
    e = eye(length(x));         % unit vectors for perturbing one component
    x_typ = ones(length(x), 1); % typical magnitude of each variable
    J = zeros(1, length(x));    % scalar-valued f assumed: J is a row vector
    for j = 1:length(x)
        % difference step proportional to |x(j)|, carrying the sign of x(j)
        if x(j) >= 0
            h_tilde = sqrt(e_mach) * max(abs(x(j)), x_typ(j));
        else
            h_tilde = -sqrt(e_mach) * max(abs(x(j)), x_typ(j));
        end
        % round the step to an exactly representable value
        x_delta = x(j) + h_tilde;
        h = x_delta - x(j);
        % central difference in coordinate direction j
        J(:, j) = (f(x + h*e(:, j)) - f(x - h*e(:, j))) / (2*h);
    end
    end
end
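A hypothetical usage example (the test objective, bounds, and condition below are placeholders). Since the procedure descends on the function it is given, a maximization problem is passed as -F:

```matlab
% Maximize F(x) = -(x-1)^2 by descending on -F, stopping if condfnc > 0
F = @(x) -(x - 1).^2;
condfnc = @(x) x - 2;      % condition violated once x > 2
[x, k, stoptype] = steepestDescent(@(x) -F(x), 0, 0.05, -5, 5, condfnc);
disp(x); disp(stoptype);
```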
2 Comments
Walter Roberson
on 19 Oct 2023
Whatever that second plot represents: can its current value be calculated given the optimization variables?
The condition appears to be satisfied near 0.3: why isn't the first place that the condition is satisfied considered to be the optimum?
The function value decreases. The condition appears to be satisfied near 1.9, which is also the lowest function value. Why isn't the optimum considered to be there, the lowest function value that also satisfies the condition?
If the magenta lines are lower bound and upper bound on a particular parameter, then why is the starting point over near 0.3, and not somewhere within the magenta lines?
Philipp Seitz
on 19 Oct 2023
Answers (4)
Is there an option for fmincon or other optimizers to define this or maybe to ask the optimizer to go on till the nonlinear condition is false?
No. What matters for optimizers is that the constraints are satisfied at the actual point. How this point was reached from the starting point, and whether or not the constraint was satisfied along the way, cannot be taken into account.
3 Comments
Philipp Seitz
on 19 Oct 2023
Torsten
on 19 Oct 2023
I still don't fully understand the underlying problem.
The requirement is that the constraint is satisfied throughout between the starting point and the final point? Thus, in higher dimensions, this would mean that the iterates must remain in a sphere around the initial point in which the constraint is satisfied?
Philipp Seitz
on 19 Oct 2023
John D'Errico
on 19 Oct 2023
It does not just "skip" the interval. It probably never saw it. An optimizer does not plot your function. It has no clue what it looks like. If it never samples a point in that interval, then it never finds that local solution.
Think of an optimizer as a blind person, given nothing more than a cane and an altimeter. Maybe scuba gear if necessary. And a good insurance policy is provided to the descendants of our intrepid explorer too. (Disclaimer: No blind people were harmed in this thought experiment.) But if all it can do is look in the local area where it starts, then it can only do its best to find the best spot it can. It does not understand your function in the slightest.
As far as your problem goes, you need to provide the magenta lines as inequality constraints. And ideally, you want to start it between those points.
2 Comments
Philipp Seitz
on 19 Oct 2023
John D'Errico
on 20 Oct 2023
Your comments are far too vague to know what you are doing, or to give you a good answer. You need to define a set of constraints that indicate where the solution is allowed to live. Somehow, someway, that is what you need to do. Since you will tell us nothing clearer than what you have done, I can tell you no more than that.
No. You cannot define a maximum step size to solve the problem.
yes, but maybe I could define a maximum step size which might be smaller than this interval, so the optimizer cannot accidentally jump over this interval?
It's a hacky thing to do, but in any of MATLAB's optimization algorithms (e.g., interior-point) which allow you to provide the Hessian H (or a HessianMultiplyFcn), you could instead provide
H + s*eye(n)
where s > 0. Increasing s will have the effect of damping the step sizes. Of course, it will also make the step direction more and more like steepest descent, which would impact the speed of convergence. Another alternative might be to provide
s*H
for s > 1.
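A sketch of what the damped-Hessian suggestion could look like with fmincon's interior-point algorithm (myObjGrad, myHess, x0, lb, ub, and nonlcon are hypothetical placeholders; myObjGrad is assumed to return the objective and its gradient, myHess the true Hessian of the Lagrangian):

```matlab
s = 10;  % damping weight, s > 0; larger s -> smaller, more gradient-like steps
% interior-point passes (x, lambda) to HessianFcn; add s*I to the true Hessian
dampedHess = @(x, lambda) myHess(x, lambda) + s*eye(numel(x));
opts = optimoptions('fmincon', ...
    'Algorithm', 'interior-point', ...
    'SpecifyObjectiveGradient', true, ...
    'HessianFcn', dampedHess);
x = fmincon(@myObjGrad, x0, [],[],[],[], lb, ub, nonlcon, opts);
```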
5 Comments
Philipp Seitz
on 19 Oct 2023
Matt J
on 19 Oct 2023
Maybe eventually the easiest way is to just go the simple way and work with steepest descent.
You could be waiting a long time for convergence... Also steepest descent can take large steps as well. You wouldn't necessarily avoid the need for damping just by switching to steepest descent.
Walter Roberson
on 19 Oct 2023
Is that steepest descent on the negative of the function to be maximized... or is it steepest ascent on the original of the function to be maximized?
Philipp Seitz
on 19 Oct 2023
Walter Roberson
on 19 Oct 2023
If you are implementing your own steepest -whatever- algorithm, then you might as well go for ascent, since you are maximizing. But if you are using already-existing functions, then descent is generally what is already implemented.
Though if you use Problem Based Optimization, you can describe the objective function and then indicate whether it is to be maximized or minimized and Problem Based Optimization would take care of the details.
Adjusting the constraints shall not be the solution for this.
Instead of a hard constraint, you could add a penalty term to your objective function to discourage the optimization from going arbitrarily far from your initial point x0, e.g., maximize
F(x) - w*norm(x - x0)^2
where w > 0 is a penalty weight.
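A sketch of the penalty idea (the weight w and the quadratic form of the penalty are assumptions here; F, x0, lb, and ub are placeholders):

```matlab
w = 5;                                    % penalty weight, w > 0
% maximize F while discouraging moves far from x0: minimize -F plus penalty
penObj = @(x) -F(x) + w*norm(x - x0)^2;
xopt = fmincon(penObj, x0, [],[],[],[], lb, ub);
```

Larger w keeps the solution closer to x0; too large a w will dominate the objective, so the weight usually needs some tuning.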