Chapter 5 Halting Conditions
We know how to start the simplex method and how to perform the pivot steps. We’ll next analyze the halting conditions for the simplex method.
There are three different places where the simplex method can halt:
Phase I returns no feasible solution.
In this case, the linear program is infeasible. This happens precisely when the optimal solution to the auxiliary linear program has \(k_0 \neq 0\).
We cannot find an entering variable.
In this case, there is no “direction” in the feasible region along which the objective value increases, i.e. we are at a local maximum. Because the objective function is linear, this local maximum is also a global maximum (Exercise 5.1) and provides an optimal solution to our linear program. No variable can enter precisely when none of the \(\bar{c}_j\) are positive.
We cannot find a leaving variable.
This means that we can keep increasing the entering variable, and hence the objective value, indefinitely without ever leaving the feasible region. No variable can leave precisely when none of the \(\bar{a}_{ij}\) are positive, where \(\bar{x}_j\) is the entering variable.
Note that these three halting conditions correspond to the three types of standard linear programs as seen in the Fundamental theorem of linear programming (Theorem 2.1).
Halting condition | Criterion | Linear program type
---|---|---
End of Phase I | No feasible solution: \(k_0 \neq 0\) | Infeasible
After a pivot step | No entering variable: \(\forall j,\ \bar{c}_j \le 0\) | Optimal solution
After a pivot step | No leaving variable: \(\forall i,\ \bar{a}_{ij} \le 0\) | Unbounded
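To make the two pivot-step conditions concrete, here is a minimal sketch in Python (the array layout and names are our own, not part of these notes) that inspects the current dictionary and reports which halting condition, if any, applies:

```python
import numpy as np

def halting_condition(c_bar, A_bar):
    """Check the two pivot-step halting conditions.

    c_bar: the coefficients c-bar of the non-basic variables in the
           objective row.
    A_bar: the coefficients a-bar, one row per basic variable,
           one column per non-basic variable.
    """
    if np.all(c_bar <= 0):
        return "optimal"                   # no entering variable
    j = int(np.flatnonzero(c_bar > 0)[0])  # a candidate entering column
    if np.all(A_bar[:, j] <= 0):
        return "unbounded"                 # no leaving variable for column j
    return f"pivot on column {j}"
```

A full implementation would loop: pivot, update the dictionary, and run this check again until it returns “optimal” or “unbounded”.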
Exercise 5.1 (You should only attempt this exercise if you’re comfortable with convex sets.) Let \(S\) be a convex set in \(\mathbb{R}^n\) and let \(f : \mathbb{R}^n \to \mathbb{R}\) be a linear function. Show that if \(p \in S\) is a local maximum of \(f\) over \(S\) then it is also a global maximum. Find a counterexample to this statement if \(S\) is non-convex.
Remark. The notion of “unboundedness” for linear programs is more restrictive than the geometric notion of unboundedness. It is possible for the feasible region of a linear program to be unbounded even though the linear program itself is not unbounded, as seen in the following example.
\[\begin{align*} \mbox{maximize: } && -x & \\ \mbox{subject to:} && x &\ge 0. \end{align*}\]
It is not too hard to show that the feasible region of an unbounded linear program is always unbounded. The above example shows that the converse is not always true.
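As a quick sanity check, we can hand this example to an off-the-shelf solver (this uses `scipy`, which is otherwise not part of these notes). Since `linprog` minimizes, maximizing \(-x\) is the same as minimizing \(x\):

```python
from scipy.optimize import linprog

# maximize -x subject to x >= 0, phrased as: minimize x with bounds x >= 0
res = linprog(c=[1.0], bounds=[(0, None)])
print(res.status, res.fun)  # expect: status 0 (optimal), objective value 0.0
```

The solver reports an optimal solution at \(x = 0\) even though the feasible region \([0, \infty)\) is unbounded.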
5.1 Degeneracy
Unfortunately, the above results are insufficient to guarantee that the simplex method will always find a solution, if one exists. It is possible for the simplex method to get stuck in a loop. This is called cycling.
Exercise 5.2 Give an example showing that the variable that enters in one pivot step can leave in the next.
Exercise 5.3 Show that the variable that leaves in one pivot step cannot enter in the next.
We saw in Section 3.1 that after a pivot step the entering and leaving variables get updated as follows: \[\begin{align*} \bar{x}_j & \mapsto \bar{b}_i/\bar{a}_{ij} \\ \bar{w}_i & \mapsto 0. \end{align*}\] This increases the value of the objective function by \(\bar{c}_j \bar{b}_i/\bar{a}_{ij}\). By the way the entering and leaving variables are chosen, the constants \(\bar{c}_j\) and \(\bar{a}_{ij}\) are always positive. The constant \(\bar{b}_i\) equals the value of the basic variable \(\bar{w}_i\) and hence must be greater than or equal to 0. This ensures that the objective value never decreases after a pivot step. If \(\bar{b}_i > 0\), the objective value strictly increases in this pivot step (and never decreases afterwards), so we can never return to this BFS.
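The following sketch of a single pivot step makes this computation explicit (the tableau layout is our own: one row per constraint, the objective row last, and the constants \(\bar{b}\) in the last column, whose bottom entry stores the negative of the objective value):

```python
import numpy as np

def pivot(T, i, j):
    """Pivot the (float) tableau T on row i and column j: variable j
    enters the basis and the basic variable of row i leaves."""
    T = T.copy()
    T[i] /= T[i, j]                 # entering variable takes the value b_i/a_ij
    for r in range(T.shape[0]):
        if r != i:
            T[r] -= T[r, j] * T[i]  # clear column j in every other row
    return T
```

Since the objective row is reduced along with the others, its last entry changes by \(-\bar{c}_j\bar{b}_i/\bar{a}_{ij}\), i.e. the objective value goes up by exactly \(\bar{c}_j\bar{b}_i/\bar{a}_{ij}\).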
Proposition 5.1 Suppose \(\bar{w}_i\) is the leaving variable. If \(\bar{b}_i > 0\), the simplex method will never return to this basic feasible solution.
Hence, the only case in which cycling can occur is when \(\bar{b}_i = 0\). As mentioned above, \(\bar{b}_i\) is the value of the basic variable \(\bar{w}_i\), so \(\bar{w}_i\) vanishes in addition to the \(n\) non-basic variables. Having more than \(n\) variables vanish at a BFS is precisely the definition of degeneracy (Definition 2.3).
Proposition 5.2 The simplex method can cycle only if there is some degenerate BFS.
Note that this is not an “if and only if” statement. Even when degenerate basic feasible solutions exist, the simplex method need not cycle.
Example 5.1 Consider the following degenerate linear program: \[\begin{equation*} \begin{array}{rrrrrrrrrl} \mbox{maximize:} & x_1 & - & 2x_2 & & & - & 2x_4 \\ \mbox{subject to:} & 0.5 x_1 & - & 3.5x_2 & - & 2x_3 & + & 4 x_4 & \le & 0 \\ & 0.5 x_1 & - & x_2 & - & 0.5 x_3 & + & 0.5 x_4 & \le & 0 \\ & x_1 & & & & & & & \le & 1 \\ & x_1 & , & x_2 & , & x_3 & , & x_4 & \ge & 0 \\ \end{array} \end{equation*}\] The following is a valid sequence of simplex steps:
- \(x_1\) enters and \(w_1\) leaves,
- \(x_2\) enters and \(w_2\) leaves,
- \(x_3\) enters and \(x_1\) leaves,
- \(x_4\) enters and \(x_2\) leaves,
- \(w_1\) enters and \(x_3\) leaves,
- \(w_2\) enters and \(x_4\) leaves.
At the end of the sixth simplex step, we loop back to the BFS at the origin that we started from.
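We can verify this cycle mechanically using the `pivot` function sketched above (repeated here so the snippet is self-contained; the tableau layout is again our own, with columns \(x_1, \ldots, x_4, w_1, w_2, w_3, b\)):

```python
import numpy as np

def pivot(T, i, j):
    """Row-reduce so that variable j becomes basic in row i."""
    T = T.copy()
    T[i] /= T[i, j]
    for r in range(T.shape[0]):
        if r != i:
            T[r] -= T[r, j] * T[i]
    return T

names = ["x1", "x2", "x3", "x4", "w1", "w2", "w3"]

# The three constraint rows of Example 5.1, then the objective row.
T0 = np.array([
    [0.5, -3.5, -2.0,  4.0, 1.0, 0.0, 0.0, 0.0],
    [0.5, -1.0, -0.5,  0.5, 0.0, 1.0, 0.0, 0.0],
    [1.0,  0.0,  0.0,  0.0, 0.0, 0.0, 1.0, 1.0],
    [1.0, -2.0,  0.0, -2.0, 0.0, 0.0, 0.0, 0.0],
])

# The six pivots above, as (entering variable, row of the leaving variable).
steps = [("x1", 0), ("x2", 1), ("x3", 0), ("x4", 1), ("w1", 0), ("w2", 1)]

T = T0
for enter, row in steps:
    T = pivot(T, row, names.index(enter))

print(np.allclose(T, T0))  # True: we are back at the initial dictionary
```

Every pivot in this sequence has \(\bar{b}_i = 0\), so the objective value stays at \(0\) throughout, exactly as Proposition 5.1 predicts.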
5.2 Bland’s Rule
Cycling is only possible in the presence of degeneracy. At degenerate vertices, we have some freedom with choosing the set of basic and non-basic variables. This choice is precisely what allows us to avoid cycling. When we revisit a basic feasible solution that is degenerate, we simply choose a different entering and leaving variable. One can show that this is always possible and leads to an algorithm that always halts.
There is another, very simple way of preventing cycling called Bland’s rule. Bland’s rule says that if there are multiple candidates for the entering or leaving variable then we choose the one with the smallest index. (We assume that the decision variables have smaller indices than the slack variables.) One can show that the simplex method always terminates if Bland’s rule is followed.
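Here is a minimal sketch of the two selection rules (function names and array layout are our own):

```python
import numpy as np

def bland_entering(c_bar):
    """Smallest-index variable with a positive objective coefficient,
    or None if there is no entering variable (the BFS is optimal)."""
    candidates = np.flatnonzero(c_bar > 0)
    return int(candidates[0]) if candidates.size > 0 else None

def bland_leaving(col_j, b_bar, basis):
    """Among the rows attaining the minimum ratio b_i / a_ij, return
    the one whose basic variable has the smallest index, or None if
    no variable can leave (the LP is unbounded).  basis[i] is the
    index of the variable that is basic in row i."""
    eligible = np.flatnonzero(col_j > 0)
    if eligible.size == 0:
        return None
    ratios = b_bar[eligible] / col_j[eligible]
    ties = eligible[np.isclose(ratios, ratios.min())]
    return int(min(ties, key=lambda i: basis[i]))
```

Run on Example 5.1, these rules would pick \(x_1\) rather than \(w_2\) as the entering variable in the sixth step, deviating from the cycle (see Example 5.2 below).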
Theorem 5.1 (Bland's rule) The simplex method always terminates provided that both the entering and the leaving variable are chosen according to Bland’s rule.
Example 5.2 In Example 5.1, the sixth simplex step violates Bland’s rule: both \(x_1\) and \(w_2\) are candidates for the entering variable, and we choose \(w_2\) whereas Bland’s rule requires us to choose \(x_1\).
The proof of Theorem 5.1 is beyond the scope of this class. Using Bland’s rule for both phases of the simplex method, we now have a complete algorithm for solving linear programs. In fact, Theorem 5.1 provides an algorithmic proof of the Fundamental theorem of linear programming (Theorem 2.1). By Theorem 5.1, the simplex method must halt and one of the three halting conditions mentioned at the start of this chapter must occur as there are no other halting conditions. But this is precisely the statement of the Fundamental theorem.