## Linear Programming Algorithms

### Linear Programming Definition

Linear programming is the problem of finding a vector *x* that
minimizes a linear function *f ^{T}x*
subject to linear constraints:

$$\underset{x}{\mathrm{min}}{f}^{T}x$$

such that one or more of the following hold:

*A·x* ≤ *b*

*Aeq·x* = *beq*

*l* ≤ *x* ≤ *u*.

### Interior-Point `linprog` Algorithm

The `linprog` `'interior-point'` algorithm is very similar to the interior-point-convex `quadprog` algorithm. It also shares many features with the `linprog` `'interior-point-legacy'` algorithm. These algorithms have the same general outline:

1. Presolve, meaning simplification and conversion of the problem to a standard form.

2. Generate an initial point. The choice of an initial point is especially important for solving interior-point algorithms efficiently, and this step can be time-consuming.

3. Predictor-corrector iterations to solve the KKT equations. This step is generally the most time-consuming.

#### Presolve

The algorithm first tries to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step can include the following:

- Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

- Check if any linear inequality constraint involves only one variable. If so, check for feasibility, and then change the linear constraint to a bound.

- Check if any linear equality constraint involves only one variable. If so, check for feasibility, and then fix and remove the variable.

- Check if any linear constraint matrix has zero rows. If so, check for feasibility, and then delete the rows.

- Determine if the bounds and linear constraints are consistent.

- Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and then fix the variables at their appropriate bounds.

- Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message.

The algorithm might arrive at a single feasible point, which represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, and if the presolve has not produced the solution, the algorithm continues to its next steps. After reaching a stopping criterion, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.

For simplicity, if the problem is not solved in the presolve step, the algorithm shifts all finite lower bounds to zero.
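A couple of the presolve tasks above can be sketched as follows; `presolve_sketch` is a hypothetical Python helper for illustration, not a `linprog` internal:

```python
import numpy as np

def presolve_sketch(A, b, lb, ub):
    """Illustrative subset of the presolve checks above. Converts
    singleton rows of the inequalities A*x <= b into bounds and
    drops zero rows after checking feasibility."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    keep = []
    for i, row in enumerate(A):
        nz = np.flatnonzero(row)
        if nz.size == 1:                          # single-variable constraint
            j = nz[0]
            if row[j] > 0:
                ub[j] = min(ub[j], b[i] / row[j])  # becomes an upper bound
            else:
                lb[j] = max(lb[j], b[i] / row[j])  # becomes a lower bound
        elif nz.size == 0:
            if b[i] < 0:                          # zero row: feasibility check
                raise ValueError("infeasible zero row")
        else:
            keep.append(i)
    if np.any(lb > ub):                           # bounds consistency check
        raise ValueError("inconsistent bounds")
    return A[keep], b[keep], lb, ub
```

For example, the singleton row `2*x1 <= 4` becomes the bound `x1 <= 2`, and a zero row with a nonnegative right-hand side is simply deleted.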

#### Generate Initial Point

To set the initial point, `x0`, the algorithm does the following.

1. Initialize `x0` to `ones(n,1)`, where `n` is the number of elements of the objective function vector *f*.

2. Convert all bounded components to have a lower bound of 0. If component `i` has a finite upper bound `u(i)`, then `x0(i) = u(i)/2`.

3. For components that have only one bound, modify the component if necessary to lie strictly inside the bound.

4. To put `x0` close to the central path, take one predictor-corrector step, and then modify the resulting position and slack variables to lie well within any bounds. For details of the central path, see Nocedal and Wright [8], page 397.
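The first two steps can be sketched as follows; `initial_x0` is a hypothetical Python helper, and the interiority adjustment and centering step are omitted:

```python
import numpy as np

def initial_x0(f, u):
    """Sketch of the first two initial-point steps above. u holds the
    upper bounds after all finite lower bounds are shifted to 0, with
    np.inf marking components that have no upper bound."""
    x0 = np.ones(len(f))            # step 1: x0 = ones(n,1)
    finite = np.isfinite(u)
    x0[finite] = u[finite] / 2      # step 2: midpoint of [0, u(i)]
    return x0
```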

#### Predictor-Corrector

Similar to the `fmincon` interior-point algorithm, the `'interior-point'` algorithm tries to find a point where the Karush-Kuhn-Tucker (KKT) conditions hold. To describe these equations for the linear programming problem, consider the standard form of the linear programming problem after preprocessing:

$$\underset{x}{\mathrm{min}}{f}^{T}x\text{ subject to }\{\begin{array}{c}\overline{A}x=\overline{b}\\ x+t=u\\ x,t\ge 0.\end{array}$$

Assume for now that all variables have at least one finite bound. By shifting and negating components, if necessary, this assumption means that all *x* components have a lower bound of 0.

$$\overline{A}$$ is the extended linear matrix that includes both linear inequalities and linear equalities. $$\overline{b}$$ is the corresponding linear equality vector. $$\overline{A}$$ also includes terms for extending the vector *x* with slack variables *s* that turn inequality constraints into equality constraints:

$$\overline{A}x=\left(\begin{array}{cc}{A}_{eq}& 0\\ A& I\end{array}\right)\left(\begin{array}{c}{x}_{0}\\ s\end{array}\right),$$

where *x*_{0} means the original *x* vector. *t* is the vector of slacks that convert upper bounds to equalities.

The Lagrangian for this system involves the following vectors:

- *y*, Lagrange multipliers associated with the linear equalities
- *v*, Lagrange multipliers associated with the lower bound (positivity constraint)
- *w*, Lagrange multipliers associated with the upper bound

The Lagrangian is

$$L={f}^{T}x-{y}^{T}\left(\overline{A}x-\overline{b}\right)-{v}^{T}x-{w}^{T}\left(u-x-t\right).$$

Therefore, the KKT conditions for this system are

$$\begin{array}{c}f-{\overline{A}}^{T}y-v+w=0\\ \overline{A}x=\overline{b}\\ x+t=u\\ {v}_{i}{x}_{i}=0\\ {w}_{i}{t}_{i}=0\\ (x,v,w,t)\ge 0.\end{array}$$

The `linprog` algorithm uses a different sign convention for the returned Lagrange multipliers than this discussion gives. This discussion uses the same sign as most literature. See `lambda`.

The algorithm first predicts a step from the Newton-Raphson formula, and then
computes a corrector step. The corrector attempts to reduce the residual in the
nonlinear complementarity equations *x _{i}v_{i}* = 0 and *t _{i}w_{i}* = 0. The Newton-Raphson step is

$$\left(\begin{array}{ccccc}0& -{\overline{A}}^{T}& 0& -I& I\\ \overline{A}& 0& 0& 0& 0\\ -I& 0& -I& 0& 0\\ V& 0& 0& X& 0\\ 0& 0& W& 0& T\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta y\\ \Delta t\\ \begin{array}{l}\Delta v\\ \Delta w\end{array}\end{array}\right)=-\left(\begin{array}{c}f-{\overline{A}}^{T}y-v+w\\ \overline{A}x-\overline{b}\\ u-x-t\\ \begin{array}{l}VX\\ WT\end{array}\end{array}\right)=-\left(\begin{array}{c}{r}_{d}\\ {r}_{p}\\ {r}_{ub}\\ \begin{array}{l}{r}_{vx}\\ {r}_{wt}\end{array}\end{array}\right),$$ | (1) |

where *X*, *V*, *W*, and
*T* are diagonal matrices corresponding to the vectors
*x*, *v*, *w*, and
*t* respectively. The residual vectors on the far right
side of the equation are:

- *r*_{d}, the dual residual
- *r*_{p}, the primal residual
- *r*_{ub}, the upper bound residual
- *r*_{vx}, the lower bound complementarity residual
- *r*_{wt}, the upper bound complementarity residual

The iterative display reports these quantities:

$$\begin{array}{l}\text{Primal infeasibility}={\Vert {r}_{p}\Vert}_{1}+{\Vert {r}_{ub}\Vert}_{1}\\ \text{Dual infeasibility}={\Vert {r}_{d}\Vert}_{\infty}.\end{array}$$

To solve Equation 1, first convert it to the symmetric matrix form

$$\left(\begin{array}{cc}-D& {\overline{A}}^{T}\\ \overline{A}& 0\end{array}\right)\left(\begin{array}{c}\Delta x\\ \Delta y\end{array}\right)=-\left(\begin{array}{c}R\\ {r}_{p}\end{array}\right),$$ | (2) |

where

$$\begin{array}{l}D={X}^{-1}V+{T}^{-1}W\\ R=-{r}_{d}-{X}^{-1}{r}_{vx}+{T}^{-1}{r}_{wt}+{T}^{-1}W{r}_{ub}.\end{array}$$

All the matrix inverses in the definitions of *D* and
*R* are simple to compute because the matrices are
diagonal.
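Because the matrices are diagonal, *D* and *R* reduce to elementwise vector operations, as this Python sketch with stand-in data shows:

```python
import numpy as np

# Elementwise computation of D and R; x, v, t, w stand in for the current
# iterates and r_d, r_vx, r_wt, r_ub for the residual vectors.
rng = np.random.default_rng(0)
n = 5
x, v, t, w = (rng.uniform(0.5, 2.0, n) for _ in range(4))
r_d, r_vx, r_wt, r_ub = (rng.standard_normal(n) for _ in range(4))

D = v / x + w / t                              # X^{-1}V + T^{-1}W
R = -r_d - r_vx / x + r_wt / t + w * r_ub / t  # -r_d - X^{-1}r_vx + T^{-1}r_wt + T^{-1}W r_ub

# The explicit diagonal-matrix form gives the same result:
D_mat = np.diag(1 / x) @ np.diag(v) + np.diag(1 / t) @ np.diag(w)
assert np.allclose(np.diag(D_mat), D)
```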

To derive Equation 2 from Equation 1, notice that the second row of Equation 2 is the same as the second matrix row of
Equation 1. The first row of Equation 2 comes from solving the last two rows of
Equation 1 for Δ*v* and
Δ*w*, and then solving for Δ*t*.

Equation 2 is symmetric, but it is not positive
definite because of the –*D* term. Therefore, you cannot solve
it using a Cholesky factorization. A few more steps lead to a different equation
that is positive definite, and hence can be solved efficiently by Cholesky
factorization.

The second set of rows of Equation 2 is

$$\overline{A}\Delta x=-{r}_{p}$$

and the first set of rows is

$$-D\Delta x+{\overline{A}}^{T}\Delta y=-R.$$

Substituting

$$\Delta x={D}^{-1}{\overline{A}}^{T}\Delta y+{D}^{-1}R$$

gives

$$\overline{A}{D}^{-1}{\overline{A}}^{T}\Delta y=-\overline{A}{D}^{-1}R-{r}_{p}.$$ | (3) |

Usually, the most efficient way to find the Newton step is to solve Equation 3 for Δ*y* using Cholesky
factorization. Cholesky factorization is possible because the matrix multiplying
Δ*y* is obviously symmetric and, in the absence of
degeneracies, is positive definite. Afterward, to find the Newton step, back
substitute to find Δ*x*, Δ*t*,
Δ*v*, and Δ*w*. However, when $$\overline{A}$$ has a dense column, it can be more efficient to solve Equation 2 instead. The `linprog` interior-point algorithm chooses the solution algorithm based on the density of columns.
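A Python sketch of the Equation 3 route on stand-in data, verifying that the recovered step satisfies both block rows of Equation 2:

```python
import numpy as np

# Form A_bar D^{-1} A_bar^T, factor it with Cholesky, solve for Delta y,
# then back-substitute for Delta x. Random data stands in for the
# presolved problem; D is stored as a vector because it is diagonal.
rng = np.random.default_rng(1)
m, n = 3, 6
A_bar = rng.standard_normal((m, n))
D = rng.uniform(0.5, 2.0, n)
R = rng.standard_normal(n)
r_p = rng.standard_normal(m)

M = (A_bar / D) @ A_bar.T                 # A_bar D^{-1} A_bar^T, SPD
L = np.linalg.cholesky(M)
rhs = -(A_bar @ (R / D)) - r_p
dy = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # two triangular solves
dx = (A_bar.T @ dy + R) / D               # Delta x = D^{-1}(A_bar^T Delta y + R)

assert np.allclose(A_bar @ dx, -r_p)              # second block row of Eq. 2
assert np.allclose(-D * dx + A_bar.T @ dy, -R)    # first block row of Eq. 2
```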

For more algorithm details, see Mehrotra [7].

After calculating the corrected Newton step, the algorithm performs more calculations to get both a longer current step, and to prepare for better subsequent steps. These multiple correction calculations can improve both performance and robustness. For details, see Gondzio [5].

The predictor-corrector algorithm is largely the same as the full `quadprog` `'interior-point-convex'` version, except for the quadratic terms. See Full Predictor-Corrector.

#### Stopping Conditions

The predictor-corrector algorithm iterates until it reaches a point that is feasible (satisfies the constraints to within tolerances) and where the relative step sizes are small. Specifically, define

$$\rho =\mathrm{max}\left(1,\Vert \overline{A}\Vert ,\Vert f\Vert ,\Vert \overline{b}\Vert \right).$$

The algorithm stops when all of these conditions are satisfied:

$$\begin{array}{c}{\Vert {r}_{p}\Vert}_{1}+{\Vert {r}_{ub}\Vert}_{1}\le \rho \,\text{TolCon}\\ {\Vert {r}_{d}\Vert}_{\infty}\le \rho \,\text{TolFun}\\ {r}_{c}\le \text{TolFun},\end{array}$$

where

$${r}_{c}=\underset{i}{\mathrm{max}}\left(\mathrm{min}\left(\left|{x}_{i}{v}_{i}\right|,\left|{x}_{i}\right|,\left|{v}_{i}\right|\right),\mathrm{min}\left(\left|{t}_{i}{w}_{i}\right|,\left|{t}_{i}\right|,\left|{w}_{i}\right|\right)\right).$$

*r _{c}* essentially measures the size of the complementarity residuals *xv* and *tw*, which are each vectors of zeros at a solution.
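A sketch of these stopping tests in Python (NumPy), where `tol_con` and `tol_fun` are assumed stand-ins for `TolCon` and `TolFun`:

```python
import numpy as np

def stop_test(r_p, r_ub, r_d, x, v, t, w, A_bar, f, b_bar,
              tol_con=1e-8, tol_fun=1e-8):
    """Sketch of the three stopping conditions above (not linprog
    internals). Returns True when all three tolerances are met."""
    rho = max(1.0, np.linalg.norm(A_bar), np.linalg.norm(f),
              np.linalg.norm(b_bar))
    # r_c: max over i of min(|x_i v_i|, |x_i|, |v_i|) and the t, w analog
    r_c = max(np.minimum.reduce([abs(x * v), abs(x), abs(v)]).max(),
              np.minimum.reduce([abs(t * w), abs(t), abs(w)]).max())
    return (np.linalg.norm(r_p, 1) + np.linalg.norm(r_ub, 1) <= rho * tol_con
            and np.linalg.norm(r_d, np.inf) <= rho * tol_fun
            and r_c <= tol_fun)
```

At a solution the residuals vanish and *x*, *v* (and *t*, *w*) are complementary, so every term in `r_c` is zero and the test passes.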

### Interior-Point-Legacy Linear Programming

#### Introduction

The interior-point-legacy method is based on LIPSOL ([52]), which is a variant of Mehrotra's predictor-corrector algorithm ([47]), a primal-dual interior-point method.

#### Main Algorithm

The algorithm begins by applying a series of preprocessing steps (see Preprocessing). After preprocessing, the problem has the form

$$\underset{x}{\mathrm{min}}{f}^{T}x\text{ such that }\{\begin{array}{c}A\cdot x=b\\ 0\le x\le u.\end{array}$$ | (4)

The upper bounds constraints are implicitly included in the constraint matrix
*A*. With the addition of primal slack variables
*s*, Equation 4 becomes

$$\underset{x}{\mathrm{min}}{f}^{T}x\text{ such that }\{\begin{array}{c}A\cdot x=b\\ x+s=u\\ x\ge 0,\text{ }s\ge 0.\end{array}$$ | (5)

which is referred to as the *primal* problem:
*x* consists of the primal variables and
*s* consists of the primal slack variables. The
*dual* problem is

$$\mathrm{max}{b}^{T}y-{u}^{T}w\text{ such that }\{\begin{array}{c}{A}^{T}\cdot y-w+z=f\\ z\ge 0,\text{ }w\ge 0,\end{array}$$ | (6)

where *y* and *w* consist of the dual variables and *z* consists of the dual slacks. The optimality conditions for this linear program, i.e., the primal Equation 5 and the dual Equation 6, are

$$\begin{array}{l}F(x,y,z,s,w)=\left(\begin{array}{c}A\cdot x-b\\ x+s-u\\ {A}^{T}\cdot y-w+z-f\\ {x}_{i}{z}_{i}\\ {s}_{i}{w}_{i}\end{array}\right)=0,\\ \text{}x\ge 0,\text{}z\ge 0,\text{}s\ge 0,\text{}w\ge 0,\end{array}$$ | (7) |

where *x _{i}z_{i}* and *s _{i}w_{i}* denote component-wise multiplication.

The `linprog` algorithm uses a different sign convention for the returned Lagrange multipliers than this discussion gives. This discussion uses the same sign as most literature. See `lambda`.

The quadratic equations *x _{i}z_{i}* = 0 and *s _{i}w_{i}* = 0 are called the *complementarity* conditions for the linear program; the other (linear) equations are called the *feasibility* conditions. The quantity

$${x}^{T}z+{s}^{T}w$$

is the *duality gap*, which measures the residual of the complementarity portion of *F* when (*x,z,s,w*) ≥ 0.

The algorithm is a *primal-dual algorithm*, meaning that both the primal
and the dual programs are solved simultaneously. It can be considered a
Newton-like method, applied to the linear-quadratic system *F*(*x,y,z,s,w*) = 0 in Equation 7, while at the same time keeping the iterates
*x*, *z*, *w*, and
*s* positive, thus the name interior-point method. (The
iterates are in the strictly interior region represented by the inequality
constraints in Equation 5.)

The algorithm is a variant of the predictor-corrector algorithm proposed by Mehrotra. Consider an iterate *v* = [*x;y;z;s;w*], where [*x;z;s;w*] > 0. First compute the so-called *prediction* direction

$$\Delta {v}_{p}=-{\left({F}^{\prime}(v)\right)}^{-1}F(v),$$

which is the Newton direction; then the so-called
*corrector* direction

$$\Delta {v}_{c}=-{\left({F}^{\prime}(v)\right)}^{-1}\left(F\left(v+\Delta {v}_{p}\right)-\mu \widehat{e}\right),$$

where *μ* > 0 is called the *centering* parameter and must be chosen carefully. $$\widehat{e}$$ is a zero-one vector with the ones corresponding to the
quadratic equations in *F*(*v*), i.e., the
perturbations are only applied to the complementarity conditions, which are all
quadratic, but not to the feasibility conditions, which are all linear. The two
directions are combined with a step length parameter *α* > 0 and update *v* to obtain the new iterate
*v*^{+}:

$${v}^{+}=v+\alpha \left(\Delta {v}_{p}+\Delta {v}_{c}\right),$$

where the step length parameter *α* is chosen so that *v*^{+} = [*x*^{+};*y*^{+};*z*^{+};*s*^{+};*w*^{+}] satisfies [*x*^{+};*z*^{+};*s*^{+};*w*^{+}] > 0.
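A standard way to choose such an *α* is the fraction-to-the-boundary rule; the following Python sketch assumes that rule (the backoff factor `eta` is an assumption, not a documented `linprog` value):

```python
import numpy as np

def step_length(v_pos, dv, eta=0.9995):
    """Fraction-to-the-boundary rule: the largest alpha in (0, 1] that
    keeps v_pos + alpha*dv strictly positive, backed off by eta.
    v_pos holds the components that must stay positive: [x; z; s; w]."""
    neg = dv < 0                     # only decreasing components limit alpha
    if not np.any(neg):
        return 1.0
    return min(1.0, eta * np.min(v_pos[neg] / -dv[neg]))
```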

In solving for the preceding predictor/corrector directions, the algorithm computes a (sparse) direct factorization on a modification of the Cholesky factors of *A·A ^{T}*. If *A* has dense columns, it instead uses the Sherman-Morrison formula. If that solution is not adequate (the residual is too large), it performs an LDL factorization of an augmented system form of the step equations to find a solution; see `ldl`.

The algorithm then loops until the iterates converge. The main stopping criterion is a standard one:

$$\mathrm{max}\left(\frac{\Vert {r}_{b}\Vert}{\mathrm{max}\left(1,\Vert b\Vert \right)},\frac{\Vert {r}_{f}\Vert}{\mathrm{max}\left(1,\Vert f\Vert \right)},\frac{\Vert {r}_{u}\Vert}{\mathrm{max}\left(1,\Vert u\Vert \right)},\frac{\left|{f}^{T}x-{b}^{T}y+{u}^{T}w\right|}{\mathrm{max}\left(1,\left|{f}^{T}x\right|,\left|{b}^{T}y-{u}^{T}w\right|\right)}\right)\le tol,$$

where

$$\begin{array}{c}{r}_{b}=Ax-b\\ {r}_{f}={A}^{T}y-w+z-f\\ {r}_{u}=\left\{x\right\}+s-u\end{array}$$

are the primal residual, dual residual, and upper-bound feasibility
respectively ({*x*} means those *x* with finite upper bounds),
and

$${f}^{T}x-{b}^{T}y+{u}^{T}w$$

is the difference between the primal and dual objective values, and
*tol* is some tolerance. The expression in the stopping criterion measures the total relative error in the
optimality conditions in Equation 7.
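As an illustration, the stopping measure can be evaluated directly; this Python sketch assumes every component of *x* has a finite upper bound, so that {*x*} = *x*:

```python
import numpy as np

def stop_measure(A, b, f, u, x, y, z, s, w):
    """The relative-error measure in the stopping criterion above."""
    r_b = A @ x - b                      # primal residual
    r_f = A.T @ y - w + z - f            # dual residual
    r_u = x + s - u                      # upper-bound feasibility
    gap = abs(f @ x - b @ y + u @ w)     # primal-dual objective difference
    return max(np.linalg.norm(r_b) / max(1, np.linalg.norm(b)),
               np.linalg.norm(r_f) / max(1, np.linalg.norm(f)),
               np.linalg.norm(r_u) / max(1, np.linalg.norm(u)),
               gap / max(1, abs(f @ x), abs(b @ y - u @ w)))
```

At an exact primal-dual solution all four ratios vanish, so the measure is 0 and any positive tolerance is met.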

The measure of primal infeasibility is ||*r _{b}*||, and the measure of dual infeasibility is ||*r _{f}*||, where the norm is the Euclidean norm.

#### Preprocessing

The algorithm first tries to simplify the problem by removing redundancies and simplifying constraints. The tasks performed during the presolve step can include the following:

- Check if any variables have equal upper and lower bounds. If so, check for feasibility, and then fix and remove the variables.

- Check if any linear inequality constraint involves only one variable. If so, check for feasibility, and then change the linear constraint to a bound.

- Check if any linear equality constraint involves only one variable. If so, check for feasibility, and then fix and remove the variable.

- Check if any linear constraint matrix has zero rows. If so, check for feasibility, and then delete the rows.

- Determine if the bounds and linear constraints are consistent.

- Check if any variables appear only as linear terms in the objective function and do not appear in any linear constraint. If so, check for feasibility and boundedness, and then fix the variables at their appropriate bounds.

- Change any linear inequality constraints to linear equality constraints by adding slack variables.

If the algorithm detects an infeasible or unbounded problem, it halts and issues an appropriate exit message.

The algorithm might arrive at a single feasible point, which represents the solution.

If the algorithm does not detect an infeasible or unbounded problem in the presolve step, and if the presolve has not produced the solution, the algorithm continues to its next steps. After reaching a stopping criterion, the algorithm reconstructs the original problem, undoing any presolve transformations. This final step is the postsolve step.

For simplicity, the algorithm shifts all lower bounds to zero.

While these preprocessing steps can do much to speed up the iterative part of
the algorithm, if the Lagrange multipliers are required, the preprocessing steps must be
undone since the multipliers calculated during the algorithm are for the
transformed problem, and not the original. Thus, if the multipliers are
*not* requested, this transformation back is not
computed, and might save some time computationally.

### Dual-Simplex-Legacy Algorithm

At a high level, the `linprog`

`'dual-simplex-legacy'`

algorithm essentially performs a simplex
algorithm on the *dual problem*.

The algorithm begins by preprocessing as described in Preprocessing. For details, see Andersen and Andersen [1] and Nocedal and Wright [8], Chapter 13. This preprocessing reduces the original linear programming problem to the form of Equation 4:

$$\underset{x}{\mathrm{min}}{f}^{T}x\text{ such that }\{\begin{array}{c}A\cdot x=b\\ 0\le x\le u.\end{array}$$

*A* and *b* are transformed versions of the
original constraint matrices. This is the primal problem.

Primal feasibility can be defined in terms of the (·)^{+} function

$${x}^{+}=\{\begin{array}{ll}x\hfill & \text{if }x>0\hfill \\ 0\hfill & \text{if }x\le 0.\hfill \end{array}$$

The measure of primal infeasibility is

$$\text{Primal infeasibility}=\sqrt{{\Vert {\left(\text{lb}-x\right)}^{+}\Vert}^{2}+{\Vert {\left(x-\text{ub}\right)}^{+}\Vert}^{2}+{\Vert {\left(A\,x-b\right)}^{+}\Vert}^{2}+{\Vert \text{Aeq}\,x-\text{beq}\Vert}^{2}}.$$
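This measure can be computed directly; `primal_infeasibility` is a hypothetical Python helper, where the (·)^{+} operation keeps only positive parts so that satisfied inequalities contribute nothing:

```python
import numpy as np

def primal_infeasibility(x, lb, ub, A, b, Aeq, beq):
    """Square root of summed squared violations of the bounds,
    inequalities A*x <= b, and equalities Aeq*x = beq."""
    plus = lambda r: np.maximum(r, 0.0)          # the (.)^+ function
    return np.sqrt(np.sum(plus(lb - x) ** 2) + np.sum(plus(x - ub) ** 2)
                   + np.sum(plus(A @ x - b) ** 2)
                   + np.sum((Aeq @ x - beq) ** 2))
```

For instance, a point that violates a lower bound by 3 and one inequality by 4 has primal infeasibility sqrt(9 + 16) = 5.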

As explained in Equation 6, the dual problem is to
find vectors *y* and *w*, and a slack variable
vector *z* that solve

$$\mathrm{max}{b}^{T}y-{u}^{T}w\text{ such that }\{\begin{array}{c}{A}^{T}\cdot y-w+z=f\\ z\ge 0,\text{ }w\ge 0.\end{array}$$

The `linprog` algorithm uses a different sign convention for the returned Lagrange multipliers than this discussion gives. This discussion uses the same sign as most literature. See `lambda`.

The measure of dual infeasibility is

$$\text{Dual infeasibility}={\Vert {A}^{T}y+z-w-f\Vert}_{2}.$$

It is well known (for example, see [8]) that any finite solution of the dual problem gives a solution to the primal problem, and any finite solution of the primal problem gives a solution of the dual problem. Furthermore, if either the primal or dual problem is unbounded, then the other problem is infeasible. And if either the primal or dual problem is infeasible, then the other problem is either infeasible or unbounded. Therefore, the two problems are equivalent in terms of obtaining a finite solution, if one exists. Because the primal and dual problems are mathematically equivalent, but the computational steps differ, it can be better to solve the primal problem by solving the dual problem.

To help alleviate degeneracy (see Nocedal and Wright [8], page 366), the dual simplex algorithm begins by perturbing the objective function.

Phase 1 of the dual simplex algorithm is to find a dual feasible point. The algorithm does this by solving an auxiliary linear programming problem.

During Phase 2, the solver repeatedly chooses an entering variable and a leaving variable. The algorithm chooses a leaving variable according to a technique suggested by Forrest and Goldfarb [3] called dual steepest-edge pricing. The algorithm chooses an entering variable using the variation of Harris’ ratio test suggested by Koberstein [6]. To help alleviate degeneracy, the algorithm can introduce additional perturbations during Phase 2.

The solver iterates, attempting to maintain dual feasibility while reducing primal infeasibility, until the solution to the reduced, perturbed problem is both primal feasible and dual feasible. The algorithm unwinds the perturbations that it introduced. If the solution (to the perturbed problem) is dual infeasible for the unperturbed (original) problem, then the solver restores dual feasibility using primal simplex or Phase 1 algorithms. Finally, the solver unwinds the preprocessing steps to return the solution to the original problem.

### Dual-Simplex-Highs Algorithm

The `'dual-simplex-highs'` algorithm is based on the HiGHS open-source software. At a high level, the `linprog` `'dual-simplex-highs'` algorithm essentially performs a simplex algorithm that maintains dual feasibility while iterating toward primal feasibility. See Primal and Dual Feasibility. For algorithmic details, see Huangfu and Hall [4], Section 2.
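The HiGHS dual simplex is also accessible outside MATLAB; for example, SciPy's `scipy.optimize.linprog` (a different function that shares the `linprog` name) selects it with `method='highs-ds'`. A minimal usage sketch:

```python
import numpy as np
from scipy.optimize import linprog  # SciPy's linprog, not the MATLAB function

# minimize -x1 - x2  subject to  x1 + 2*x2 <= 4,  4*x1 + 2*x2 <= 12,  x >= 0
c = [-1.0, -1.0]
A_ub = [[1.0, 2.0], [4.0, 2.0]]
b_ub = [4.0, 12.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")  # HiGHS dual simplex
assert res.success
```

The optimum sits at the vertex where both inequalities are active, x = (8/3, 2/3), with objective value −10/3.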

The algorithm begins by preprocessing as described in Preprocessing. For background, see Andersen and Andersen [1] and Nocedal and Wright [8], Chapter 13. This preprocessing typically reduces the original linear programming problem to a smaller problem that is easier to solve.

It is well known (for example, see [8]) that any finite solution of the dual problem gives a solution to the primal problem, and any finite solution of the primal problem gives a solution of the dual problem. Furthermore, if either the primal or dual problem is unbounded, then the other problem is infeasible. And if either the primal or dual problem is infeasible, then the other problem is either infeasible or unbounded. Therefore, the two problems are equivalent in terms of obtaining a finite solution, if one exists. Because the primal and dual problems are mathematically equivalent, but the computational steps differ, it can be better to solve the primal problem by solving the dual problem.

To help alleviate degeneracy (see Nocedal and Wright [8], page 366), the dual simplex algorithm begins by perturbing the objective function coefficients.

Phase 1 of the dual simplex algorithm aims to find a dual feasible point. The algorithm does this by using the Phase 2 algorithm (below) applied to a Phase 1 problem. This problem has the (perturbed) objective cost coefficients and bounds on variables and constraints that are modified as follows. The bounds on single-sided upper (lower) bounded variables and constraints are set to [0, 1] ([–1, 0]), the bounds on fixed variables and equations are set to zero, and the bounds on free variables and constraints are set to [–1000, 1000]. Since all bounds are finite, any basis is made dual feasible by setting the primal variable to the appropriate bound. If the dual value of a (nonbasic) variable is not feasible with respect to the original bounds, then it contributes negatively to the dual objective function value. Observe that the dual objective function is bounded above by zero. When the Phase 1 problem is solved to optimality, if the objective value is zero, then the current point is dual feasible with respect to the original bounds. A negative objective value implies dual infeasibility of the original problem with perturbed objective cost coefficients, and strongly suggests dual infeasibility of the unperturbed original problem. This scenario is assessed by reverting to the unperturbed objective function coefficients and attempting to continue solving the Phase 1 problem from the current dual solution.

The solution for Phase 1 is the initial point for Phase 2 of the main algorithm.

During Phase 2, the solver repeatedly chooses a leaving variable and an entering variable. The algorithm chooses a leaving variable according to a technique suggested by Forrest and Goldfarb [3] called dual steepest-edge pricing. The algorithm chooses an entering variable using the variation of Harris’ ratio test suggested by Koberstein [6]. Any small dual infeasibilities due to rounding are removed by shifting the cost coefficients.

In Phase 2, the dual simplex algorithm, starting at the dual feasible point from Phase 1, is used to solve the original problem. At each iteration, the algorithm tests the optimality condition (primal feasibility) and stops if the current solution is optimal. If the current solution is not optimal then, using Basic and Nonbasic Variables for definitions, the algorithm:

1. Chooses one variable, called the *leaving variable*, from the basic variables whose primal value is infeasible.

2. Chooses one variable, called the *entering variable*, from the nonbasic variables and uses it to replace the leaving variable in the basis.

3. Updates the current primal and dual solutions, and the current objective value.

In what is termed the dual ratio test, the dual value of the leaving variable is increased as much as possible while maintaining dual feasibility. The entering variable is the nonbasic variable whose dual value is zeroed in this limit. Computing the data for the dual ratio requires the solution of one system of equations. After the basis change, a second system of equations must be solved to update the primal solution. If there is no limit in the dual ratio test, then the perturbed problem is dual unbounded (and hence primal infeasible).

The solver iterates, maintaining dual feasibility while reducing primal infeasibility, until the solution to the reduced, perturbed problem is both primal and dual feasible. The algorithm then removes the objective cost perturbations. If the solution (to the perturbed problem) is dual infeasible for the unperturbed (original) problem, then the solver restores dual feasibility using the primal simplex Phase 2 algorithm. Finally, the solver unwinds the preprocessing steps to return the solution to the original problem.

#### Basic and Nonbasic Variables

This section defines the terms *basis*,
*nonbasis*, and *basic feasible
solutions* for a linear programming problem. The definition
assumes that the problem is given in the following standard form:

$$\underset{x}{\mathrm{min}}{f}^{T}x\text{ such that }\{\begin{array}{c}A\cdot x=b,\\ lb\le x\le ub.\end{array}$$

(Note that *A* and *b* are not the matrix
and vector defining the inequalities in the original problem.) Assume that
*A* is an *m*-by-*n*
matrix, of rank *m* < *n*, whose columns are
{*a*_{1}, *a*_{2}, ..., *a _{n}*}.
Suppose that $$\left\{{a}_{{i}_{1}},{a}_{{i}_{2}},\mathrm{...},{a}_{{i}_{m}}\right\}$$ is a basis for the column space of *A*, with index set *B* = {*i*_{1}, *i*_{2}, ..., *i _{m}*}, and that *N* = {1, 2, ..., *n*}\*B* is the complement of *B*. The submatrix *A _{B}* is called a *basis* and the complementary submatrix *A _{N}* is called a *nonbasis*. The vector of *basic variables* is *x _{B}* and the vector of *nonbasic variables* is *x _{N}*. At each iteration in phase 2, the algorithm replaces one column of the current basis with a column of the nonbasis and updates the variables *x _{B}* and *x _{N}* accordingly.

If *x* is a solution to *A·x* = *b* and all the nonbasic variables in
*x*_{N} are equal to either their lower
or upper bounds, *x* is called a *basic
solution*. If, in addition, the basic variables in
*x*_{B} satisfy their lower and upper
bounds, so that *x* is a feasible point, *x*
is called a *basic feasible solution*.
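A small Python check of these definitions: pick a basis, fix the nonbasic variables at a bound, and solve for the basic variables (illustrative data only):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
lb = np.zeros(4)

basis = [0, 1]                       # index set B; here N = {2, 3}
nonbasis = [2, 3]

x = np.zeros(4)
x[nonbasis] = lb[nonbasis]           # nonbasic variables at their lower bound
x[basis] = np.linalg.solve(A[:, basis], b - A[:, nonbasis] @ x[nonbasis])

assert np.allclose(A @ x, b)         # x is a basic solution...
assert np.all(x >= lb)               # ...and feasible, so basic feasible
```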

#### Primal and Dual Feasibility

The measure of primal infeasibility is reported as the maximum absolute constraint violation, which is, in terms of the original problem variables,

max(0, lb – x, x – ub,
abs(Aeqx – beq), Ax –
b). | (8) |

To define dual feasibility, begin by partitioning the linear matrix
*A* (from Basic and Nonbasic Variables) into two
blocks: a basis matrix *B* containing linearly independent
columns and a non-basis matrix *N*:

$$A=\left[\begin{array}{cc}B& N\end{array}\right].$$

A variable *x* for the equation *A·x* = *b* is then naturally partitioned into *x _{B}* and *x _{N}*:

$$\begin{array}{c}Ax=[\begin{array}{cc}B& N\end{array}]\left[\begin{array}{c}{x}_{B}\\ {x}_{N}\end{array}\right]\\ =B{x}_{B}+N{x}_{N}=b.\end{array}$$

Because *B* is composed of linearly independent columns, it is invertible, and so

$$\begin{array}{c}{x}_{B}={B}^{-1}\left(b-N{x}_{N}\right)\\ ={B}^{-1}b-{B}^{-1}N{x}_{N}.\end{array}$$

The objective function *f ^{T}x* can
similarly be written

$$\begin{array}{c}{f}^{T}x={f}_{B}^{T}{x}_{B}+{f}_{N}^{T}{x}_{N}\\ ={f}_{B}^{T}\left({B}^{-1}b-{B}^{-1}N{x}_{N}\right)+{f}_{N}^{T}{x}_{N}\\ ={f}_{B}^{T}{B}^{-1}b+\left({f}_{N}^{T}-{f}_{B}^{T}{B}^{-1}N\right){x}_{N}.\end{array}$$

Reduced costs are the adjusted cost coefficients of the non-basic variables.

$${c}_{N}^{T}={f}_{N}^{T}-{f}_{B}^{T}{B}^{-1}N.$$
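As a numeric check, the reduced costs can be computed directly; the matrix and costs below are an assumed small example, not data from the text:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
f = np.array([-2.0, -3.0, 0.0, 0.0])

B, N = A[:, :2], A[:, 2:]            # basis = first two columns
f_B, f_N = f[:2], f[2:]

# c_N^T = f_N^T - f_B^T B^{-1} N, using a solve instead of an explicit inverse
c_N = f_N - f_B @ np.linalg.solve(B, N)
```

Here both reduced costs are nonnegative, so with the nonbasic variables at their lower bounds the solution is dual feasible.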

If the reduced cost *c _{j}* (the *j*th entry in *c _{N}^{T}*) is greater than or equal to 0 for every nonbasic variable at its lower bound, and less than or equal to 0 for every nonbasic variable at its upper bound, the current solution is dual feasible. The dual simplex algorithm maintains this dual feasibility and tries to make the simplex iterates primal feasible.

## References

[1] Andersen, E. D., and K. D. Andersen. *Presolving
in linear programming*. Math. Programming 71, 1995, pp.
221–245.

[2] Applegate, D. L., R. E. Bixby, V. Chvátal and W. J.
Cook, *The Traveling Salesman Problem: A Computational Study*,
Princeton University Press, 2007.

[3] Forrest, J. J., and D. Goldfarb. *Steepest-edge
simplex algorithms for linear programming*. Math. Programming 57,
1992, pp. 341–374.

[4] Huangfu, Q. and J. A. J. Hall. *Parallelizing the dual revised simplex method.* Math. Prog. Comp. 10, pp. 119–142, 2018. Available at `https://link.springer.com/article/10.1007/s12532-017-0130-5`.

[5] Gondzio, J. “Multiple centrality corrections in a primal dual method for linear programming.” *Computational Optimization and Applications*, Volume 6, Number 2, 1996, pp. 137–156. Available at `https://www.maths.ed.ac.uk/~gondzio/software/correctors.ps`.

[6] Koberstein, A. *Progress in the dual simplex
algorithm for solving large scale LP problems: techniques for a fast and stable
implementation*. Computational Optim. and Application 41, 2008, pp.
185–204.

[7] Mehrotra, S. “On the Implementation of a
Primal-Dual Interior Point Method.” *SIAM Journal on
Optimization*, Vol. 2, 1992, pp 575–601.

[8] Nocedal, J., and S. J. Wright. *Numerical
Optimization*, Second Edition. Springer Series in Operations
Research, Springer-Verlag, 2006.