Chasing what is wrong with 'dual-simplex-highs' in linprog
I am trying to understand why the 'dual-simplex-highs' algorithm fails while 'dual-simplex-legacy' works fine on this specific LP problem of size 467.
The linear program involves only linear equality constraints and lower bounds x >= 0 on some (but not all) components of x.
The Aeq size is 211 x 467 and its condition number is not high at all IMO (about 10), so I would not consider it a numerically difficult problem.
'dual-simplex-legacy' is able to find the solution, but the default algorithm 'dual-simplex-highs' is not, and its output does not say much about what is wrong.
Can someone tell me where I could investigate further to find the cause?
load('linprog_test.mat')
size(Aeq)
cond(full(Aeq))
linprogopt = optimoptions('linprog', 'Algorithm', 'dual-simplex-legacy');
[lpsol, ~, exitflag, out] = linprog(c, [], [], Aeq, beq, LB, UB, linprogopt)
linprogopt = optimoptions('linprog', 'Algorithm', 'dual-simplex-highs');
[lpsol, ~, exitflag, out] = linprog(c, [], [], Aeq, beq, LB, UB, linprogopt)
8 Comments
Bruno Luong on 13 Sep 2024 (edited)
Bruno Luong on 16 Sep 2024 (edited)
Julian on 18 Oct 2024
The new default 'dual-simplex-highs' algorithm used by linprog/intlinprog is certainly not flawed or buggy in general, as it drives the HiGHS mixed-integer programming solver that requires vast numbers of LPs to be solved and is justifiably very popular. This particular instance contains a very large range of values in the constraint matrix - even after the tiny values have been ignored. You can't have values of 1e-40 being meaningful relative to values of order one when solving LPs using double precision arithmetic. This isn't to say that HiGHS can't cope with a very large range of values in the constraint matrix. You're just increasing the probability of numerical failure.
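To make the double-precision point concrete, here is a small check (illustrative only, in Python rather than MATLAB): a value of order 1e-40 vanishes completely when added to a value of order one, because doubles carry only about 16 significant decimal digits.

```python
import sys

# A coefficient of order 1e-40 is far below the relative precision of a
# double (~2.2e-16), so it contributes nothing next to values of order one.
tiny = 1e-40
print(1.0 + tiny == 1.0)       # True: the tiny value is lost entirely
print(sys.float_info.epsilon)  # ~2.22e-16, the relative spacing at 1.0
```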
Bruno Luong on 19 Oct 2024 (edited)
Julian on 20 Oct 2024
I’m not saying that your LP is numerically challenging to solve: I know that it isn’t, so I have no reason to doubt your statement that the matrix condition number is about 10. Leaving aside your problem’s LP condition (which is not the same as its constraint matrix condition), omitting the very small values in your matrix (HiGHS drops 1176 values with magnitudes in [1.20981e-70, 9.7101e-10]) won’t affect the numerical linear algebra. Hence, in this sense, the small values aren’t meaningful.
If the constraint matrix of an LP has small values that are meaningful – because they lead to a correspondingly large matrix condition number and sensitivity in the optimal solution – then solvers may struggle to solve the problem. For values of “1e-40”, this could be hopelessly hard. Hence small numbers are dropped by LP solvers to reduce the risk of trying to solve a problem with excessive constraint matrix condition – and if the problem was well conditioned anyway, the small values are safe to drop.
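The drop step described above can be sketched as follows (a NumPy illustration on made-up data, with an assumed tolerance of 1e-9, not the tolerance HiGHS actually uses): entries whose magnitude falls below the tolerance are zeroed before the solve.

```python
import numpy as np

# Sketch of a presolve drop step: zero out constraint-matrix entries whose
# magnitude is below a drop tolerance. Data and tolerance are illustrative.
rng = np.random.default_rng(0)
Aeq = rng.standard_normal((4, 6))
Aeq[0, 0] = 1e-40                # a "meaningless" tiny coefficient
Aeq[1, 2] = 1e-12                # another one, still far below tolerance

drop_tol = 1e-9
Aeq_clean = np.where(np.abs(Aeq) < drop_tol, 0.0, Aeq)
print(np.count_nonzero(Aeq) - np.count_nonzero(Aeq_clean))  # 2 entries dropped
```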
For problems like yours, where the largest matrix entries are of order unity and there are many small values (even after dropping excessively small values) the default scaling scheme in the HiGHS LP solver can result in small values being scaled up significantly in the LP that is solved internally – before scaling is removed and the solution cleaned up if necessary. It was my analysis of this that informed Derya’s response.
LP conditioning goes beyond the conditioning of the constraint matrix. Since an LP with a perfectly conditioned constraint matrix can have a non-unique solution – a situation that’s obviously not possible with a well-conditioned linear system of equations – small changes in the objective, RHS and matrix coefficients can lead to significant changes in the optimal solution.
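A toy illustration of this point, sketched with SciPy's HiGHS dual simplex as a stand-in for the MATLAB solver: with c = [1, 1], every point of the segment {x1 + x2 = 1, x >= 0} is optimal, so a 1e-9 tilt of the objective moves the reported solution from one vertex to the other, even though the constraint matrix is perfectly conditioned.

```python
import numpy as np
from scipy.optimize import linprog

# Degenerate objective over the segment x1 + x2 = 1, x >= 0 (SciPy's
# default bounds are x >= 0): any feasible point minimizes c = [1, 1].
A_eq, b_eq = [[1.0, 1.0]], [1.0]

# Tilt the objective by 1e-9 in each direction; the optimum jumps between
# the two vertices (1, 0) and (0, 1).
x_a = linprog([1.0, 1.0 + 1e-9], A_eq=A_eq, b_eq=b_eq, method="highs-ds").x
x_b = linprog([1.0 + 1e-9, 1.0], A_eq=A_eq, b_eq=b_eq, method="highs-ds").x
print(np.linalg.norm(x_a - x_b, np.inf))  # ~1: an O(1e-9) change moved x by O(1)
```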
Incorporating the HiGHS solvers as the default in the MathWorks Optimization Toolbox is a significant enhancement in general, and it has been done with due consideration by MathWorks. They wouldn’t be advocating solvers that are “flawed, if not buggy”.
Bruno Luong on 21 Oct 2024 (edited)
This seems like a more applicable test of whether the problem is well conditioned. There does appear to be some sensitivity to perturbation, though of course the result may depend on epsilon:
load('linprog_test.mat')
epsilon = 1e-6;
linprogopt = optimoptions('linprog', 'Display', 'none', 'Algorithm', 'dual-simplex-legacy');
% Reference solution on the unperturbed problem
[lpsol0, ~, exitflag, out] = linprog(c, [], [], Aeq, beq, LB, UB, linprogopt);
size(lpsol0)
% Re-solve with randomly perturbed objectives and record the deviation
dev = inf(1, 10);
for i = 1:numel(dev)
    [lpsol, ~, exitflag] = linprog(c + randn(size(c))*epsilon, [], [], Aeq, beq, LB, UB, linprogopt);
    if exitflag ~= 1, continue; end
    dev(i) = norm(lpsol - lpsol0, inf);
end
dev
Accepted Answer
More Answers (2)
It doesn't like your super-small Aeq values.
load('linprog_test.mat')
Aeq(abs(Aeq)<1e-8)=0;
linprogopt = optimoptions('linprog', 'Algorithm', 'dual-simplex-highs');
[lpsol, ~, exitflag, out] = linprog(c, [], [], Aeq, beq, LB, UB, linprogopt)
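The same thresholding experiment can be sketched with SciPy on toy data (assumed data, not the posted .mat file): zeroing sub-1e-8 coefficients leaves the optimum of a well-conditioned LP unchanged, which is what makes the drop safe.

```python
import numpy as np
from scipy.optimize import linprog

# Toy equality-constrained LP with two "meaningless" tiny coefficients.
c = np.array([2.0, 3.0, 1.0])
A_eq = np.array([[1.0,   1.0, 1e-40],
                 [1e-12, 1.0, 2.0]])
b_eq = np.array([4.0, 5.0])
bounds = [(0, None)] * 3

# Solve as-is, then with entries below 1e-8 zeroed out.
sol_raw = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs-ds")
A_clean = np.where(np.abs(A_eq) < 1e-8, 0.0, A_eq)
sol_clean = linprog(c, A_eq=A_clean, b_eq=b_eq, bounds=bounds, method="highs-ds")

# Both solves succeed and agree on the optimum x = (4, 0, 2.5).
print(sol_raw.status, sol_clean.status, np.allclose(sol_raw.x, sol_clean.x, atol=1e-7))
```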
intlinprog offers some additional diagnostic output:
load('linprog_test.mat')
intlinprog(c,[], [], [], Aeq, beq, LB, UB)
1 Comment
Bruno Luong on 13 Sep 2024 (edited)