Friday, January 25, 2013

Solution Pool: "All" Is Not All

I was struck by an idle thought (which left a wicked bruise) regarding yesterday's post about finding "all" MIP solutions using the CPLEX solution pool. Branch-and-bound and branch-and-cut algorithms generally prune a node for one of three reasons:
  • the LP relaxation is infeasible, in which case no amount of further torture will cause the node to confess to a solution;
  • the objective value of the LP relaxation is worse (or at least no better) than that of the current incumbent solution, in which case further division of the node will never yield a solution better than the incumbent; or
  • "the" optimal solution to the LP relaxation is integer-feasible, in which case it is unclear how to further partition the node (and whether to bother).
Note the quotation marks around "the" in the third bullet. When searching for all optimal solutions, or even all feasible solutions, the first rule (prune an infeasible node) still makes sense. The second rule needs to be tempered, as there may be solutions in other parts of the tree that are as good as the incumbent even if they are not better (and, in the case of finding all feasible solutions, you do not care that solutions in other branches of the tree have worse objective values). That brings us to the third rule.

A node problem whose LP relaxation has an integer-feasible optimum may in fact have multiple integer-feasible optima, and certainly may have multiple integer-feasible solutions (which is important if you are looking for all feasible solutions, not just all optima). The usual branching rule (pick a variable that should be integer but is not and separate into two nodes by rounding it up or down) is of no avail if the solution is integer-feasible. One could concoct an arbitrary rule for further partitioning, but it would be outside the scope of the normal branch-and-bound algorithm.

[Edit: Ed Klotz pointed out an important distinction to me. When the node relaxation solution is integer-feasible, you need to look at the sequence of branching decisions leading to the node. If those decisions fixed all the integer variables to specific values, then the node LP solution is the only integer-feasible solution at that node, and there's nothing left to do but prune it. If the branching decisions leave open the possibility that some of the integer variables might take on different values consistent with all the added cuts that got you there, then further processing is required, and CPLEX will indeed do this.]

So the idle thought was "what does the CPLEX populate function do in the third case?", and the answer seems to be "prune the node". [Edit: I think I was wrong here. See the update below.] Consider the following trivial test problem:\[ \begin{array}{lrcl} \textrm{maximize} & & x_{1}\\ \textrm{s.t.} & \sum_{i=1}^{d}x_{i} & \ge & \frac{d}{3}\\ & x & \in & \mathbb{B}^{d} \end{array} \] where $\mathbb{B}=\{0,1\}$. There is no significance to the choice of 3 as the denominator of the right-hand side of the constraint, and in fact the constraint is there primarily to make sure that CPLEX extracts and solves for all variables. (Without it, CPLEX treats the problem as having a single variable $x_1$.) The problem clearly has multiple optimal solutions, formed by setting $x_1=1$ and $x_j=1$ for at least $\lceil d/3\rceil-1$ other values of $j$.
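To see just how many optima the toy model has, one can enumerate them by brute force for small $d$. The following plain-Java sketch (not part of the original experiment; class and method names are mine, and no CPLEX is involved) counts the integer-feasible optima:

```java
public class TrivialCounter {
    // Count optimal solutions of: maximize x1 s.t. sum(x) >= d/3, x binary.
    // Vectors are encoded as bit masks; bit 0 plays the role of x1.
    static int countOptima(int d) {
        int rhs = (int) Math.ceil(d / 3.0); // binary sum >= d/3 means >= ceil(d/3)
        int best = -1, count = 0;
        for (int mask = 0; mask < (1 << d); mask++) {
            if (Integer.bitCount(mask) < rhs) continue; // constraint violated
            int obj = mask & 1;                         // objective = x1
            if (obj > best) { best = obj; count = 1; }
            else if (obj == best) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countOptima(6)); // prints 31: many optima, not one
    }
}
```

For $d=6$ the constraint is $\sum x_i \ge 2$, so any mask with $x_1=1$ and at least one other bit set is optimal, giving $2^5-1=31$ optima -- which makes a pool containing a single solution look suspicious indeed.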

I ran the populate method against this problem, using the same parameter choices mentioned in yesterday's post (and in particular asking for up to 100 optimal solutions). For various (small) choices of $d$, CPLEX produced only a single solution in the pool ($x_j=1$ for all $j$). Turning off the presolver did not affect this, nor did monkeying with the root and node LP algorithms (nor any other parameter change I tried). I'm not sure why that particular solution was the winner. As to why only one solution was returned, my guess is that the first step branches on (or otherwise fixes) $x_1$, rules out the node where $x_1=0$ based on bound, finds an integer-feasible optimum in the node where $x_1=1$ and then fails to partition that node any further. (This is strictly conjecture, since I am not privy to the inner workings of populate.)

My conclusion is that if you really need all the optima for a MIP, you will need to do something clever (either with callbacks or looping through a sequence of problems) and not rely on the CPLEX solution pool.

Update: My conjecture about CPLEX branching once and then not partitioning the winning node was wrong. I noticed that neither the "phase I" nor "phase II" output contained a node log, which would suggest that CPLEX obtained the one solution it found by presolving, even though I had presolving turned off. So I modified the objective function to $x_1 - \epsilon \sum_{j\gt 1}x_j$, which means it is no longer harmless to force all $x_j$ to 1 for $j\gt 1$. With the new objective, CPLEX generated a node log and found all optimal solutions (as well as some suboptimal ones). Similarly, with just one of the variables penalized, say an objective function $x_1 - \epsilon x_2$, CPLEX branched and found all optima.

So failing to partition a node with an integer solution is not the problem. This may just be a bug.

Update #2: It is a bug. Here's an official response from the IBM/ILOG support team.

This is a bug that we will fix. Thomas Opfer's conjecture from Jan 28 is on target:

> Might it be that they iterate trivial examples because it is faster than starting the complete CPLEX-overhead?

Specifically, before CPLEX's main presolve gets started, CPLEX computes a trivial objective upper bound on this maximization problem by removing all the constraints and setting the variables to their appropriate bound based on the objective value. In this case that established an upper bound of 1. CPLEX also applies some very simple heuristics, which find a feasible solution with an objective of 1. So, the lower and upper bound for the MIP already match, which means an optimal solution has already been found. So, no tree, which the solution pool algorithm requires, is available, and CPLEX essentially stops with the one optimal solution in the pool. This is not correct behavior when the solution pool intensity has been set to 4 to tell CPLEX to enumerate all solutions. CPLEX properly handles a similar case when the full presolve solves the whole model and no tree is available to the solution pool, so we need to do likewise in this situation.

We will fix this in either the next fixpack or feature release, whichever comes first. Sorry for the inconvenience, although the good news is that the bug will only bite on very simple models that CPLEX can solve to optimality with some very minimal presolving effort. In the short term, you can work around this on such models by setting the heuristic frequency parameter to -1 to disable heuristics.

Thursday, January 24, 2013

Finding "All" MIP Optima: The CPLEX Solution Pool

Following up on my recent post about finding all optimal solutions (or at least multiple optimal solutions) to MIP models, it turns out that recent versions of CPLEX make this rather easy to program (if not necessarily quick to do) through their solution pool feature. I'll illustrate the technique with a simple example.

The problem

The example problem is to find $N$ binary vectors of dimension $D$ so as to maximize the minimum Hamming distance between them. One source of this problem is coding theory (not to be confused with cryptography): we might want a "vocabulary" of $N$ "words" that are as unlikely as possible to be confused with each other if individual bits are independently molested, with low likelihood, during transmission. The mathematical model is \[ \begin{array}{lrclr} \textrm{maximize} & & d\\ \textrm{s.t.} & \Vert x^{(j)}-x^{(k)}\Vert & \ge & d & \forall j,k\in\left\{ 1,\dots,N\right\} \ni j\lt k\\ & x^{(j)} & \in & \mathbb{B}^{D} & \forall j\in\{1,\dots,N\} \end{array} \] where $\Vert\cdot\Vert$ denotes the Hamming length (number of non-zeros) of a vector and $\mathbb{B}^D$ is the space of $D$-dimensional binary vectors.
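For tiny instances the problem can be solved by brute force, which is handy for sanity-checking the MILP. Here is a plain-Java sketch (class and method names are mine) that tries every set of $N$ distinct bit-mask vectors and reports the best achievable minimum pairwise Hamming distance:

```java
public class HammingBruteForce {
    // Max-min pairwise Hamming distance over all choices of n distinct
    // binary vectors of dimension dim (vectors encoded as bit masks).
    static int maxMinDistance(int n, int dim) {
        return search(new int[n], 0, 0, dim);
    }

    private static int search(int[] pick, int k, int from, int dim) {
        if (k == pick.length) { // a full set: score it by its min distance
            int min = Integer.MAX_VALUE;
            for (int i = 0; i < pick.length; i++)
                for (int j = i + 1; j < pick.length; j++)
                    min = Math.min(min, Integer.bitCount(pick[i] ^ pick[j]));
            return min;
        }
        int best = 0;
        // Choose vectors in increasing mask order; sets are unordered, so
        // this loses no generality and avoids duplicate sets.
        for (int v = from; v < (1 << dim); v++) {
            pick[k] = v;
            best = Math.max(best, search(pick, k + 1, v + 1, dim));
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxMinDistance(4, 3)); // prints 2 (even-weight code)
    }
}
```

For $N=4$, $D=3$ the optimum is 2, achieved for instance by the even-weight code $\{000, 011, 101, 110\}$.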

The MILP model

To massage this into a mixed-integer linear program (MILP), we introduce the following auxiliary variables:
  • $y^{(j,k)}_i\in\mathbb{B}$ is 1 if and only if vectors $x^{(j)}$ and $x^{(k)}$ differ in component $i$;
  • $d^{(j,k)}$ is the Hamming distance between vectors $x^{(j)}$ and $x^{(k)}$; and
  • $d$ is the minimum Hamming distance between any pair of vectors (to be maximized).
The MILP model is:

\[ \begin{array}{lrclr} \textrm{maximize} & & d\\ \textrm{s.t.} & y_{i}^{(j,k)} & \le & x_{i}^{(j)}+x_{i}^{(k)} & \forall i\in\{1,\dots,D\};\forall j,k\in\left\{ 1,\dots,N\right\} \ni j\lt k\\ & y_{i}^{(j,k)} & \le & 2-x_{i}^{(j)}-x_{i}^{(k)} & \forall i\in\{1,\dots,D\};\forall j,k\in\left\{ 1,\dots,N\right\} \ni j\lt k\\ & d^{(j,k)} & = & \sum_{i=1}^{D}y_{i}^{(j,k)} & \forall j,k\in\left\{ 1,\dots,N\right\} \ni j\lt k\\ & d & \le & d^{(j,k)} & \forall j,k\in\left\{ 1,\dots,N\right\} \ni j\lt k\\ & x^{(j)} & \in & \mathbb{B}^{D} & \forall j\in\{1,\dots,N\}\\ & y^{(j,k)} & \in & \mathbb{B}^{D} & \forall j,k\in\left\{ 1,\dots,N\right\} \ni j\lt k\\ & d^{(j,k)} & \in & [0,D] & \forall j,k\in\left\{ 1,\dots,N\right\} \ni j\lt k\\ & d & \in & [0,D]. \end{array} \]


The model contains a high degree of symmetry, and as I've mentioned before, symmetry will slow the search for multiple optima (and, at the same time, can be exploited to easily generate alternate optima from a given solution). I'll add some constraints to the MILP model to get rid of at least some of the symmetry.

First, consider any solution $x^{(1)},\dots,x^{(N)}$ to the model, and suppose I toggle bit $i$ in every vector; that is, define $\hat{x}^{(j)}$ for $j\in\{1,\dots,N\}$ by \[ \hat{x}_{h}^{(j)}=\begin{cases} x_{h}^{(j)} & h\neq i\\ 1-x_{i}^{(j)} & h=i \end{cases}. \] We obtain a new solution with $\Vert\hat{x}^{(j)}-\hat{x}^{(k)}\Vert=\Vert x^{(j)}-x^{(k)}\Vert\:\forall j\lt k$, so the new solution has the same objective value as the old one. Therefore, without loss of generality, I can require that $x^{(1)}=0$.

Second, if we permute the indices of the vectors (other than the first vector, which we have forced to be 0), we obtain another set of $N$ vectors with the same pairwise distances. To eliminate that, we will constrain the vectors to be lexicographically increasing; that is, if $x^{(j)}_h=x^{(j+1)}_h$ for $h\lt i$ and $x^{(j)}_i \neq x^{(j+1)}_i$, then $x^{(j)}_i=0$ and $x^{(j+1)}_i = 1$. Lexicographic ordering constraints can in general be a pain to write, but our auxiliary variables $y$ actually make it simple here:\[ x_{i}^{(j+1)}\ge x_{i}^{(j)}-\sum_{h\lt i}y_{h}^{(j,j+1)}. \]
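One way to gain confidence in that inequality is to verify exhaustively, for small $D$, that it holds for a pair of binary vectors precisely when the pair is in lexicographic order. A plain-Java sketch (helper names are mine, not from my CPLEX code):

```java
public class LexCheck {
    // Does the constraint family x2[i] >= x1[i] - sum_{h<i} y[h] hold,
    // where y[h] = |x1[h] - x2[h]| as in the MILP model?
    static boolean constraintsHold(int[] x1, int[] x2) {
        int diffs = 0; // running value of sum_{h<i} y[h]
        for (int i = 0; i < x1.length; i++) {
            if (x2[i] < x1[i] - diffs) return false;
            if (x1[i] != x2[i]) diffs++;
        }
        return true;
    }

    // True if x1 precedes x2 lexicographically or equals it.
    static boolean lexLE(int[] x1, int[] x2) {
        for (int i = 0; i < x1.length; i++)
            if (x1[i] != x2[i]) return x1[i] < x2[i];
        return true;
    }

    static int[] toBits(int mask, int d) {
        int[] v = new int[d];
        for (int i = 0; i < d; i++) v[i] = (mask >> i) & 1;
        return v;
    }

    public static void main(String[] args) {
        int d = 4;
        boolean equivalent = true;
        for (int a = 0; a < (1 << d); a++)
            for (int b = 0; b < (1 << d); b++)
                equivalent &= constraintsHold(toBits(a, d), toBits(b, d))
                              == lexLE(toBits(a, d), toBits(b, d));
        System.out.println(equivalent); // prints true: constraint <=> lex order
    }
}
```

The logic behind the equivalence: at the first index where the two vectors differ, the sum of earlier $y$ values is zero, so the inequality forces $x^{(j)}_i=0$, $x^{(j+1)}_i=1$; at every later index the right-hand side is nonpositive and the inequality is vacuous.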

Setting up the solution pool

Before invoking CPLEX's populate method to fill in the solution pool, we need to set some parameter values. Let's say that our ambition is to find $M$ optimal solutions if possible. (To find all optimal solutions, choose a really large value of $M$ and hope you live long enough to see the run terminate.) The first thing to do is to set the SolutionPoolCapacity parameter to $M$. Next, set the SolutionPoolIntensity parameter to 4, the most aggressive setting ("leave no stone unturned").

The PopulateLim parameter dictates the maximum number of integer-feasible solutions CPLEX will generate before it declares victory and retires from the battlefield. It may stop before reaching this limit because it has exhausted all possible solutions, or because no possible solutions remain that would satisfy the pool gap parameters (discussed below), or because the standard time limit or node limit is reached. The default value for this parameter is 20. You will want it to be at least $M$, but realistically you need to set it considerably larger than $M$, since suboptimal solutions found along the way count against the limit. Set it to something really, really large to be safe.

We want to screen out suboptimal solutions. Since the objective function is integer-valued, a suboptimal solution will be worse than the incumbent by at least 1. Allowing for some rounding error, we set the pool absolute gap parameter (SolnPoolAGap) to something between 0 and 1 (I'll use 0.5). Any solution within 0.5 of the incumbent will have the same minimum Hamming distance after cleaning up rounding error; any solution more than 0.5 worse than the incumbent has a minimum Hamming distance at least 1 smaller, and is thus excluded by the gap. (I'm trusting that I won't get a rounding error worse than 0.5 in any objective value.) We will leave the relative pool gap parameter (SolnPoolGap) at its default value of $10^{75}$.

The gap parameter prevents CPLEX from adding solutions to the pool that are inferior to the best solution already in the pool. It does not prevent suboptimal solutions from being added to the pool before the optimum is found, nor does it force them to be dropped from the pool once better solutions are found. The documentation may be a bit unclear on the latter point; in the sections on the gap parameters, it says that inferior solutions will be discarded but does not explicitly limit that statement to new solutions. I've verified by experiment, though, that inferior solutions found prior to the first truly optimal incumbent may be retained.

The (partial) cure for this is to adjust the pool replacement strategy (SolnPoolReplace). This parameter tells CPLEX how to make room when the pool is full and a new integer-feasible solution is found. Note that, by virtue of our choice of SolnPoolAGap value, the newly found solution will be at least as good as the best of the solutions in the pool. The default value (CPX_SOLNPOOL_FIFO = 0) uses a first in, first out replacement strategy, without regard to objective value. What we want is CPX_SOLNPOOL_OBJ = 1, which kicks out the solution with the worst objective value to make room for the new and improved solution. (If you want to find lots of feasible but not necessarily optimal solutions, you might want CPX_SOLNPOOL_DIV = 2, which kicks out solutions so as to enhance the diversity of the pool.)

Are we done yet???

Now call the populate method (whether in an API or the interactive optimizer), sit back, and wait for the results. When CPLEX finishes, the pool is full of optimal solutions, right? Not quite. If there are fewer than $M$ optimal solutions to your model, chances are quite high that some suboptimal solutions will have made it into the pool. Since the pool never filled up, no solutions were kicked out, so they'll still be sitting there. You will need to check the objective values of each solution in the pool, find the best of those values, and use it to weed out the suboptimal solutions.
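The weeding-out step itself is independent of CPLEX: given the objective values of the pool members, keep only those within the absolute-gap tolerance of the best one found. A plain-Java sketch (names are mine, and this assumes a maximization problem, as in our example):

```java
import java.util.ArrayList;
import java.util.List;

public class PoolFilter {
    // Indices of pool members whose objective is within tol of the best
    // (maximization); objectives[i] is the value of pool solution i.
    static List<Integer> optimalIndices(double[] objectives, double tol) {
        double best = Double.NEGATIVE_INFINITY;
        for (double z : objectives) best = Math.max(best, z);
        List<Integer> keep = new ArrayList<>();
        for (int i = 0; i < objectives.length; i++)
            if (objectives[i] >= best - tol) keep.add(i); // survives the cut
        return keep;
    }

    public static void main(String[] args) {
        double[] z = {3.0, 4.0, 3.9999999, 2.0, 4.0};
        System.out.println(optimalIndices(z, 0.5)); // prints [1, 2, 4]
    }
}
```

Note that the tolerance serves the same purpose as the 0.5 absolute gap above: solution 2, optimal up to rounding error, is retained, while the genuinely suboptimal ones are dropped.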

You are welcome to see my Java source code for this problem. I wrote it for Java 7, but I believe it will work with Java 6 if you edit one line.

Friday, January 18, 2013

Symmetry and The Quest for All Solutions

This is an addendum to Wednesday's post on finding all solutions to linear and (mixed) integer linear programs.

In the context of integer programming models, symmetry refers to situations in which some nontrivial permutation of the indices of variables (and perhaps constraints) yields the original problem again. The implication is that, given a feasible solution to the problem, applying an appropriate permutation to the variables results in another, nominally distinct, feasible solution with identical objective value. I don't intend to go into symmetry in depth here, but in general terms, left untreated, symmetry can slow the progress of branch-and-bound or branch-and-cut algorithms. The reason is that when incumbent solutions are found, the existence of their symmetric counterparts in other portions of the tree (with identical objective value) inhibits pruning of nodes based on bounds.

[Edit: I've added a fourth approach. Hat tip to Ed Klotz for reminding me of it in a comment.]

There are four fundamental approaches to dealing with symmetry:
  • eliminate it in the model by adding constraints;
  • modify the algorithm to take it into account;
  • do nothing and live with the slower convergence of the algorithm; or
  • find an alternative model with no (or less) symmetry.
The purpose of this post is to point out that, when looking for all or even several optimal solutions to integer problems, the third option (doing nothing) is a poor choice.

A quick example

Packing and covering problems are notorious for symmetry, so I'll use a packing problem as an example. Suppose that I am moving my household, using a moving company. The moving company provides boxes, which I will charitably assume are identical in size and shape. They also charge for the boxes, so I wish to use the fewest boxes possible. Being an OR person, I create and solve a bin packing model. The optimal solution puts my gym clothes and my paperback books in box #3 and my fine china and my computer in box #7. (I never said it was a good model.)

Now suppose I (carefully) dump out the contents of boxes 3 and 7 and swap them, packing the china and computer in box #3 and gym clothes and paperbacks in box #7. The boxes are identical, so everything fits. I have not changed the number of boxes being used (the objective function), so this second solution is again optimal.

But is it really a "second solution"? The numbering of the boxes means nothing to me. When I get to my destination, the one making all the clattering sounds will have the computer and what's left of the china, and the one smelling a bit ripe will have the paperback books and the gym clothes (did I mention I forgot to wash them?), regardless of the numbers printed on them.

The reformulation option


As Ed Klotz notes in his comment, sometimes a different formulation eliminates the symmetry, possibly (but not necessarily) at the expense of adding variables. For example, I recently was playing with a vehicle routing problem (VRP) in which the vehicles are all identical. The VRP requires assigning customers to vehicles and then organizing each vehicle's route (essentially a traveling salesman problem for each vehicle). A straight-up formulation might look like the following (in which $d_{ij}$ is the distance or travel cost between customers $i$ and $j$, with $i=0$ for the depot, $c_i$ is the load required by customer $i\in\{1,\dots,N\}$, and $C$ is the capacity of each of the $V$ vehicles):
\[ \begin{array}[t]{lrclr} \mathrm{minimize} & & \sum_{v=1}^{V} \sum_{i=0}^{N}\sum_{j=0}^{N} d_{ij}x_{ijv}\\ \mathrm{s.t.} & \sum_{v=1}^{V}y_{iv} & = & 1 & \forall i>0\\ & \sum_{i=0}^{N}x_{ijv} & = & y_{jv} & \forall j>0,\forall v\\ & \sum_{i=0}^{N}x_{jiv} & = & y_{jv} & \forall j>0,\forall v\\ & \sum_{j=1}^{N}c_{j}y_{jv} & \le & C & \forall v\\ & x_{ijv} & \in & \left\{ 0,1\right\} & \forall i,j\ge0,\forall v\\ & y_{iv} & \in & \left\{ 0,1\right\} & \forall i>0,\forall v\\ & ? & ? & ? \end{array} \] Binary variables $x_{ijv}$ signal whether stop $j$ (a customer or, if $j=0$, the depot) immediately follows stop $i$ on the route for vehicle $v$, while the binary variables $y_{iv}$ signal whether customer $i$ is assigned to vehicle $v$. The first constraint requires that every customer be assigned to exactly one vehicle. The second and third say that, on any given route, a particular customer is entered and exited exactly once if on that route and zero times if not. The fourth constraint asserts the capacity limit of the vehicles. The final set of constraints (which I deliberately omitted -- the question marks) would be subtour elimination constraints, which can be handled in a variety of ways.

The fact that all vehicles are identical introduces one level of symmetry: if I take a feasible (optimal) solution and rotate the routes among the vehicles, I get an equivalent feasible (optimal) solution. Another source of symmetry is that, within any single route, if I traverse the route in reverse order I get another solution with an identical objective value.

The indistinguishable nature of the vehicles, though, allows us to rethink the formulation in a way that both eliminates the first level of symmetry and also reduces the size of the model. Suppose that, instead of having $V$ vehicles, I had one vehicle that would be making $V$ runs, returning to the "barn" each time. The total number of stops in a day would be $N+V+1$, consisting of starting the day at the depot (1), stopping at each customer ($N$), and returning to the depot $V$ times. So we can think of the entire day's schedule as a single string of integers $0,\dots,N$ with $V+1$ zeros and one instance of each other integer.

With that picture in mind, let $x_{ij}$ be 1 if stop $j$ immediately follows stop $i$ in the master schedule for the day, and let $u_i$ be the load of the vehicle when arriving at stop $i\ge 0$. Assume that $c_0=0$ (the depot has no demand). We arrive at the following alternative model:
\[ \begin{array}[t]{lrclr} \mathrm{minimize} & & \sum_{i=0}^{N}\sum_{j=0}^{N} & d_{ij}x_{ij}\\ \mathrm{s.t.} & \sum_{i=0}^{N}x_{ij} & = & \begin{cases} 1 & j>0\\ V & j=0 \end{cases} & \forall j\ge0\\ & \sum_{i=0}^{N}x_{ji} & = & \begin{cases} 1 & j>0\\ V & j=0 \end{cases} & \forall j\ge0\\ & u_{j}-u_{i} & \le & C-\left(C+c_{i}\right)x_{ij} & \forall i>0,\forall j\ge0\\ & x_{ij} & \in & \left\{ 0,1\right\} & \forall i,j\ge0\\ & u_{j} & \in & [0,C] & \forall j\ge0 \end{array} \] The first two constraints say that every customer is entered and exited once, while the depot is entered and exited exactly once per route. (If the depot occurs in two consecutive stops, it means we have more vehicles than we need.) The third constraint says that the load in the vehicle when entering stop $j$ is no greater than the load entering the previous stop minus the demand at that stop, with the important exception that at the first stop after a trip to the depot, the limit is not enforced (so the maximum load entering the first post-depot stop is $C$, the vehicle capacity).

The model still contains the second source of symmetry -- if we reverse the stops between any two consecutive visits to the depot, the travel distance is unchanged -- but we have eliminated the first source of symmetry. Once we have a solution, we can use a random number generator (or take bribes from the drivers) to assign routes to vehicles.
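Decoding the single-string solution into per-vehicle routes is straightforward. The sketch below (plain Java, not tied to any solver; names are mine) splits the giant tour at depot visits and verifies the capacity along the way:

```java
import java.util.ArrayList;
import java.util.List;

public class TourDecoder {
    // Split a "giant tour" (0 = depot) into per-vehicle routes, checking
    // that each route's total demand fits the vehicle capacity.
    static List<List<Integer>> decode(int[] tour, int[] demand, int capacity) {
        List<List<Integer>> routes = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int load = 0;
        for (int stop : tour) {
            if (stop == 0) {                     // back at the depot: close route
                if (!current.isEmpty()) routes.add(current);
                current = new ArrayList<>();
                load = 0;
            } else {
                load += demand[stop];
                if (load > capacity)
                    throw new IllegalArgumentException("capacity exceeded");
                current.add(stop);
            }
        }
        if (!current.isEmpty()) routes.add(current);
        return routes;
    }

    public static void main(String[] args) {
        // Customers 1..4 with demands 3, 4, 2, 5; capacity 8; two routes.
        int[] demand = {0, 3, 4, 2, 5};
        System.out.println(decode(new int[]{0, 1, 2, 0, 3, 4, 0}, demand, 8));
        // prints [[1, 2], [3, 4]]
    }
}
```

Once decoded, the routes can be handed out to the (interchangeable) vehicles in any order, which is exactly the symmetry we removed from the model.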

The reason to find "all" solutions

As I mentioned in the previous post, possible motivations for finding all (or at least multiple) optimal solutions include providing decision makers with choices and dealing with secondary criteria omitted from the model. In either of those cases, I'm fairly certain that providing solutions that are permutations of the original solutions will be unsatisfactory. To quote Spock (in the novel "Spock Must Die!"), "a difference which makes no difference is no difference". So in those cases it behooves you to avoid solutions that are permutations of other solutions. If you take the third approach ("do nothing"), not only will solution time increase, but you will need to screen solutions to detect which are permutations.

If permutations will do ...

On the other hand, if permuted solutions satisfy your lust for multiple solutions, I would still advocate either constraining the symmetry away or modifying the algorithm to avoid (or exploit) it where possible. Doing either of those requires that you recognize the symmetry (which is often something as simple as permuting labels of things). Once you find a proven optimal solution, it is quick and easy to manually apply the permutation(s) to find other "distinct" optima.

Wednesday, January 16, 2013

Finding All Solutions (or Not)

People occasionally ask how to find all optimal solutions (or, less frequently, all feasible solutions) to a mathematical programming problem. To keep the length of this post reasonable (?), I'll restrict myself to linear and (mixed) integer linear programs, and make only some general observations. Specific methods will be left to future posts.

Let's consider a linear optimization problem of the form \[ \begin{array}{lrcl} \textrm{minimize} & & f(x,y)\\ \textrm{s.t.} & (x,y) & \in & \mathcal{P}\\ & x & \in & \mathbb{Z}^{m}\\ & y & \in & \mathbb{R}^{n} \end{array} \] where (without loss of generality) we are minimizing a linear function \(f\) over a convex polytope \(\mathcal{P}\). For a linear program (LP), omit the integer variables $x$; for a pure integer linear program (ILP), omit the continuous variables $y$. Again without loss of generality, we assume that the feasible region $\tilde{\mathcal{P}}=\mathcal{P}\cap\left(\mathbb{Z}^{m}\times\mathbb{R}^{n}\right)$ is not empty (else there is nothing to find) and that $\mathcal{P}$ (and therefore $\tilde{\mathcal{P}}$) is bounded. In any realistic situation, $\mathcal{P}$ can be bounded by adding "box constraints" (bounds) for $x$ and $y$. Unrealistic situations are left to the reader as an exercise. Since the continuous hull of $\mathcal{P}$ (relaxing the integrality constraints) is closed (courtesy of convexity) and bounded (by assumption), and $f$ is continuous (courtesy of linearity), and since the feasible region $\tilde{\mathcal{P}}$ is nonempty (by assumption), at least one optimal solution must exist.

The two questions are equivalent. Suppose that we are looking for all feasible solutions (i.e., the contents of $\tilde{\mathcal{P}}$). That's the same as finding all optimal solutions with a constant-valued objective function $f(x,y)=c$. On the other hand, finding all optimal solutions to the problem above is equivalent to finding all points in the modified feasible region \[ \left\{ x\in\mathbb{Z}^{m},y\in\mathbb{R}^{n}:(x,y)\in\mathcal{P}^{*}\right\} \] where $\mathcal{P}^{*}=\mathcal{P}\cap\left\{ (x,y):f(x,y)\le f^{*}\right\} $ and $f^{*}$ is the optimal objective value in the original problem. Since we obtained $\mathcal{P}^{*}$, the set of optimal solutions, by intersecting $\mathcal{P}$ with a half-space, $\mathcal{P}^{*}$ is another convex polytope.

Henceforth I'll restrict myself to the question of finding all optimal solutions.

Uncountably infinite is a lot. Start with a linear program (no $x$). If $\hat{y}$ and $\tilde{y}$ are two optimal solutions with objective value $f^{*}$, then any convex combination $\lambda\hat{y}+(1-\lambda)\tilde{y}$ with $0\le\lambda\le1$ belongs to $\mathcal{P}$ (by convexity) and has the same objective value $f^{*}$ (by linearity of $f$). So right away we have an uncountable number of optima. With a pure integer program (no $y$), the situation is rosier: since we have assumed that $\mathcal{P}$ is bounded, the feasible region $\tilde{\mathcal{P}}$ will contain a finite number of integer lattice points, and so the number of optimal solutions will be finite. (Note, however, that "finite" does not preclude "annoyingly large".)
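The convex-combination argument can be illustrated numerically with a toy LP of my own (not from the discussion above): minimize $x+y$ subject to $x+y\ge1$ on the unit box. Both $(1,0)$ and $(0,1)$ are optimal with value 1, and the sketch below checks that every blend of the two is again feasible and optimal:

```java
public class ConvexCombo {
    // Toy LP: minimize x + y subject to x + y >= 1, 0 <= x, y <= 1.
    static boolean feasible(double x, double y) {
        return x + y >= 1 - 1e-9 && x >= 0 && x <= 1 && y >= 0 && y <= 1;
    }

    public static void main(String[] args) {
        boolean allOptimal = true;
        for (double lambda = 0; lambda <= 1.0001; lambda += 0.1) {
            // Convex combination of the two optima (1,0) and (0,1).
            double x = lambda * 1 + (1 - lambda) * 0;
            double y = lambda * 0 + (1 - lambda) * 1;
            allOptimal &= feasible(x, y) && Math.abs((x + y) - 1.0) < 1e-9;
        }
        System.out.println(allOptimal); // prints true: a continuum of optima
    }
}
```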

With a mixed integer linear program (MILP), things are a trifle more complex. The projection of the set of optimal solutions onto $\mathbb{Z}^{m}$, i.e., the set of $x$ for which a $y$ exists such that $(x,y)$ is optimal, will be finite. If for each such $x$ there is exactly one $y$ yielding an optimal solution, the number of optima is finite. As soon as we find one $x$ such that $(x,y)$ and $(x,y')$ are both optimal with $y\neq y'$, we're back to having an uncountable number of optima.

Putting all that together, for ILPs we can ask to find all optima. For LPs, the best we can do is ask to find all extreme points (vertices) of $\mathcal{P}$ that are optimal. For MILPs, we can ask for all $x^{*}$ that correspond to optimal solutions, and for each such $x^{*}$ we can ask for a single $y^{*}$, or at most all extreme points of $\left\{ y\in\mathbb{R}^{n}:\left(x^{*},y\right)\in\mathcal{P}^{*}\right\}$ (yet another polytope).

This could take a while. Let's consider the LP case again (no $x$). Per the Wikipedia entry, the Avis-Fukuda algorithm for enumerating the vertices of a polytope $\mathcal{P}$ expressed as the solution to a set of linear inequalities in $y$ has time complexity of $O(knv)$, where $k$ is the number of nonredundant inequalities defining $\mathcal{P}$, $n$ is the dimension of $y$ and $v$ is the number of vertices to be enumerated. (Sorry, my notation is slightly different from the Wikipedia's. Deal with it.) This does not look too bad ... until you stop to think about $v$. If the polytope is a hypercube, $k=2n$ and $v=2^{n}$, so we're looking at $O\left(n^{2}2^{n}\right)$ time. That's fine for small $n$, but it blows up fairly quickly as $n$ increases.

What about the integer case? Well, ILPs and MILPs are NP-hard in general, and that's just to find one optimal solution. Finding all optimal solutions would be, what, NP-NP-hard?

Is this really what you want? This brings me around to motivation. Why do you want all optimal (let alone all feasible) solutions? One possible answer is to offer a decision maker some choices, charitably assuming that more than one optimum exists. (Here is a related post.) Another is that there are criteria not included in the model that distinguish the "quality" of various solutions (so not all optimal solutions are equally good, even though they are all equally optimal).

My suggestion in both those cases is to find one optimal solution $\left(x^{*},y^{*}\right)$ with objective value $f^{*}$, then find "epsilon-optimal" solutions satisfying some diversity criterion. We pick our favorite $\epsilon>0$, add the constraint $f(x,y)\le f^{*}+\epsilon$, and change the objective to something else. Possibilities include (but are by no means limited to) maximizing the $L_{1}$ distance of the next solution from $\left(x^{*},y^{*}\right)$ or from the centroid of previously found solutions, maximizing the minimum distance from all previously found solutions, maximizing or minimizing one particular variable (e.g., $\max x_{i}$ or $\min y_{i}$), or optimizing a randomly generated linear function of $x$ and $y$.
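For instance, if the variables of interest are binary, the $L_1$ distance from the incumbent $x^{*}$ linearizes directly, since $\left|x_{i}-x_{i}^{*}\right|$ equals $x_{i}$ when $x_{i}^{*}=0$ and $1-x_{i}$ when $x_{i}^{*}=1$. The second-stage model is then \[ \begin{array}{lrcl} \textrm{maximize} & \sum_{i:x_{i}^{*}=0}x_{i}+\sum_{i:x_{i}^{*}=1}\left(1-x_{i}\right)\\ \textrm{s.t.} & f(x,y) & \le & f^{*}+\epsilon\\ & (x,y) & \in & \mathcal{P},\ x\in\mathbb{Z}^{m},\ y\in\mathbb{R}^{n}, \end{array} \] which finds an $\epsilon$-optimal solution differing from $x^{*}$ in as many binary components as possible.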

If you have some alternate criterion, not part of the original model, that you were going to use to pick among competing optima, and if that criterion is linear (or can be approximated by a linear function), it makes an obvious choice for the second stage objective.

Sunday, January 13, 2013

Java Vulnerability

The U.S. Computer Emergency Readiness Team (US-CERT) recently issued an advisory bulletin regarding a serious security flaw in the Oracle Java Runtime Environment (JRE). I've read news articles about it in several places (here is one), and the comments sections universally show rampant confusion (and the inevitable flaming of the confused). A few key points:
  • JavaScript is not Java. In fact, they are essentially unrelated. There is no reason (or at least no reason related to the security bulletin) to disable JavaScript in your browser, and doing so will cause some web sites to become unusable.
  • It's a browser issue. The CERT bulletin recommends "... that Java be disabled temporarily in web browsers ..." (emphasis added) and provides a link to instructions from Oracle on how to do so. (Better instructions, IMHO, are here, courtesy of a link in the ZDNet article.) They are not recommending that Java be uninstalled. Uninstalling Java is liable to cause various programs (such as the OpenOffice and LibreOffice suites) to become inoperable. You just need to disable the Java browser plugin, either globally through the Java control panel or locally in each browser's plugin control page.
  • It may well be limited to Oracle Java. I have no idea whether the flaw exists in the OpenJDK runtime environment and the IcedTea plugin. My feeling is that very few websites require a Java browser plugin, so I'm inclined to disable IcedTea on my system just to be safe.
  • Help is coming. According to an article today on the PCWorld website, Oracle has a fix coming within a matter of days. So keep an eye out for notification of a Java update, and install it when it becomes available.

Thursday, January 10, 2013

Selecting the Default Application for a File Type

In Linux Mint (and Ubuntu, other Linux distributions and various competing operating systems), it is frequently possible to open a data file in an appropriate program by double-clicking the icon for the file, either on your desktop or in a file browser. This is handled by associating the file's extension (which Linux maps to a MIME type) with the corresponding program. In Linux, the association is done by the file manager. The distributions I've used in recent years (Ubuntu and Mint) have been based on the GNOME desktop, which in turn uses the Nautilus file manager.

Many programs set up file associations at the time you install them. For example, the LibreOffice productivity suite associates itself with OpenDocument file extensions (.odt etc.). Many programs, however, do not do that automatically; in some cases, it is because they claim no specific file extensions. In those cases, Nautilus offers the option to right-click a file of the desired type (extension), click Properties > Open With, select the application (clicking Show other applications if necessary to find it), and Add the application to the list of default choices for that file type.
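For those who prefer the command line, xdg-mime (from the xdg-utils package) can query and set the same associations Nautilus manages. A minimal sketch, assuming xdg-utils is installed; the .desktop file name and MIME type below are my guesses for SQLite Studio, so substitute whatever your system actually uses:

```shell
# Sketch: query and set a file association from the command line.
# "sqlitestudio.desktop" and "application/x-sqlite3" are stand-ins.
touch example.sqlite                      # throwaway file to query
if command -v xdg-mime >/dev/null 2>&1; then
    # Report the MIME type assigned to the file
    xdg-mime query filetype example.sqlite
    # Make the named .desktop entry the default handler for that MIME type
    xdg-mime default sqlitestudio.desktop application/x-sqlite3
fi
rm -f example.sqlite
result=ok
echo "$result"
```

If xdg-utils is absent, the sketch simply skips the association step, so it is safe to paste and adapt.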

This works fine as long as Nautilus lists your application of choice. I ran into a problem with that today. I was trying to make SQLite Studio the default application to open files with the extension .sqlite. (Incidentally, if you are looking for a program that will let you create, browse and modify SQLite databases, I definitely recommend SQLite Studio. It's available on Linux, Mac OS X, Windows and a few other operating systems.) Unfortunately, it did not appear on the application list when I clicked Show other applications. This appears to be the result of a bug in current versions of Nautilus.

Two things are necessary to have your application display in the list. First, there needs to be a .desktop file for it, typically residing either in /usr/share/applications or in ~/.local/share/applications. In my case, the .desktop file already existed: I had added SQLite Studio to the start menu, which automatically creates a .desktop file in the latter directory. So why was it not appearing on the list??

After considerable searching, I found the answer here: the command line must end with "%u" (which is the placeholder into which Nautilus will insert the name and path of the file to be opened). I suppose this makes sense in hindsight; without that placeholder, the user might be selecting an application that does not accept a file to open as a command line argument. Still, I would have thought Nautilus would just append the file name to the command line, with or without the placeholder.

You can add the placeholder either by editing the .desktop file directly in a text editor (append it to the "Exec=" line), or by right-clicking the start menu, selecting Edit Menu, finding the application, clicking Properties and editing the Command field. Once you have a .desktop file containing the "%u" placeholder, the application should appear on the Nautilus list of available applications.
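As a concrete illustration, here is roughly what a working .desktop file might look like, along with a quick check that the Exec line carries the placeholder. The Exec path and MIME type are assumptions on my part; point them at your actual installation:

```shell
# Sketch: write a hypothetical .desktop file and confirm the %u placeholder.
# The Exec path below is a guess; use the path to your actual binary.
dir=$(mktemp -d)
cat > "$dir/sqlitestudio.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=SQLite Studio
Exec=/opt/sqlitestudio/sqlitestudio %u
MimeType=application/x-sqlite3;
Categories=Development;Database;
EOF
# Nautilus will only list the entry if the Exec line ends with %u
if grep -q 'Exec=.*%u$' "$dir/sqlitestudio.desktop"; then
    check=ok
else
    check=missing
fi
rm -rf "$dir"
echo "$check"
```

Dropping a file like this into ~/.local/share/applications (with a correct Exec path) is equivalent to what the menu editor does for you.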

Tuesday, January 1, 2013

2013 Off to a Shaky Start

Here's a bit of irony for you: I had just finished reading a blog post in which the author pooh-poohed the notion that the year 2013 would be unlucky due to the last two digits when my system crashed ... repeatedly. Maybe the triskaidekaphobes are onto something (or maybe the Mayans were just off by a week and change).

When I first installed Linux Mint 14 (Cinnamon), upgrading from Mint 11, my display seemed a bit dimmer than it had been. (My monitor is a Samsung SyncMaster 932GW; the video card is an NVIDIA GeForce 6150SE nForce 430.) My eyes adjusted to the difference, which I suspect was caused by the open-source Nouveau driver that ships with Mint. Until today, that's been the only issue with the Nouveau driver.

After reading the aforementioned blog post about 2013, I moved on to another post on the same blog, one containing a modest amount of graphics and nothing that, to my eye, would require any sort of hardware acceleration. As I scrolled down the post, my system suddenly did a spontaneous reboot. I think it was the X system rather than Linux itself, but I'm no expert on these things. The symptoms were a crash to a black screen, then the Mint log-in screen.

So I logged in again, and as soon as the password was submitted, I got a black screen (which is actually normal between log-in and desktop) and then the log-in screen again. Uh-oh! After logging in (again), the cycle repeated, only this time what I assume was the log-in screen was unreadable (the sort of colored "sleet" you get when monitor and display card disagree about scan frequency). I couldn't restart the X display with ctrl-alt-F1, so I powered down the computer the old-fashioned way (the power switch), started it up again, and managed to log in successfully.

Poking around in the syslog file with the system log viewer (Menu > System Tools > System Log), I found the following at what I think was the point of the initial crash:

Jan  1 17:49:20 HomePC kernel: [15178.991600] [drm] nouveau 0000:00:0d.0: Failed to idle channel 1.
Jan  1 17:49:20 HomePC gnome-session[1620]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.#012
Jan  1 17:49:20 HomePC mdm[1193]: WARNING: mdm_slave_xioerror_handler: Fatal X error - Restarting :0

Note the reference to nouveau having a problem, followed by a fatal X error. The next chunk of the log shows the same pattern as my two log-in attempts led to more crashes:

Jan  1 17:49:21 HomePC acpid: 1 client rule loaded
Jan  1 17:49:35 HomePC kernel: [15193.814381] [TTM] Failed to expire sync object before buffer eviction
Jan  1 17:49:35 HomePC kernel: [15193.814443] [TTM] Failed to expire sync object before buffer eviction
Jan  1 17:49:35 HomePC kernel: [15193.817930] [TTM] Failed to expire sync object before buffer eviction
Jan  1 17:49:51 HomePC kernel: [15209.778576] [drm] nouveau 0000:00:0d.0: Failed to idle channel 1.
Jan  1 17:49:51 HomePC kernel: [15209.782509] [drm] nouveau 0000:00:0d.0: Setting dpms mode 3 on vga encoder (output 0)
Jan  1 17:49:51 HomePC gnome-session[5894]: Gdk-WARNING: gnome-session: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.#012
Jan  1 17:49:51 HomePC mdm[5858]: WARNING: mdm_slave_xioerror_handler: Fatal X error - Restarting :0
Jan  1 17:49:51 HomePC pulseaudio[6317]: [pulseaudio] client-conf-x11.c: xcb_connection_has_error() returned true
Jan  1 17:49:51 HomePC pulseaudio[6322]: [pulseaudio] client-conf-x11.c: xcb_connection_has_error() returned true
Jan  1 17:49:51 HomePC pulseaudio[6325]: [pulseaudio] pid.c: Stale PID file, overwriting.
Jan  1 17:49:51 HomePC pulseaudio[6325]: [pulseaudio] bluetooth-util.c: org.bluez.Manager.ListAdapters() failed: org.freedesktop.DBus.Error.AccessDenied: Rejected send message, 2 matched rules; type="method_call", sender=":1.111" (uid=1000 pid=6325 comm="/usr/bin/pulseaudio --start --log-target=syslog ") interface="org.bluez.Manager" member="ListAdapters" error name="(unset)" requested_reply="0" destination="org.bluez" (uid=0 pid=853 comm="/usr/sbin/bluetoothd ")
Jan  1 17:49:51 HomePC pulseaudio[6325]: [pulseaudio] server-lookup.c: Unable to contact D-Bus: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /tmp/dbus-z2bkq6PwPB: Connection refused
Jan  1 17:49:51 HomePC pulseaudio[6325]: [pulseaudio] main.c: Unable to contact D-Bus: org.freedesktop.DBus.Error.NoServer: Failed to connect to socket /tmp/dbus-z2bkq6PwPB: Connection refused
Jan  1 17:49:51 HomePC pulseaudio[6330]: [pulseaudio] pid.c: Daemon already running.
Jan  1 17:49:54 HomePC acpid: client 5871[0:0] has disconnected
Jan  1 17:49:54 HomePC acpid: client connected from 6332[0:0]
Jan  1 17:49:54 HomePC acpid: 1 client rule loaded
Jan  1 17:49:54 HomePC kernel: [15213.146184] [drm] nouveau 0000:00:0d.0: Failed to idle channel 4.
Jan  1 17:49:54 HomePC kernel: [15213.173003] [drm] nouveau 0000:00:0d.0: Setting dpms mode 0 on vga encoder (output 0)
Jan  1 17:49:54 HomePC kernel: [15213.173011] [drm] nouveau 0000:00:0d.0: Output VGA-1 is running on CRTC 0 using output A
Jan  1 17:50:18 HomePC kernel: [15237.313716] [drm] nouveau 0000:00:0d.0: Failed to idle channel 1.
Jan  1 17:50:18 HomePC mdm[6318]: WARNING: mdm_slave_xioerror_handler: Fatal X error - Restarting :0
Jan  1 17:50:21 HomePC kernel: [15240.366064] [drm] nouveau 0000:00:0d.0: Failed to idle channel 4.
Jan  1 17:50:21 HomePC acpid: client 6332[0:0] has disconnected
Jan  1 17:50:21 HomePC acpid: client connected from 6745[0:0]
Jan  1 17:50:21 HomePC acpid: 1 client rule loaded
Jan  1 17:50:28 HomePC kernel: [15247.081273] [drm] nouveau 0000:00:0d.0: Failed to idle channel 3.
Jan  1 17:50:41 HomePC kernel: Kernel logging (proc) stopped.
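If you hit something similar, a quick grep can pull the driver failures and fatal X errors out of syslog without scrolling through the noise. A sketch over a sample excerpt (on a real system, replace the here-document with /var/log/syslog):

```shell
# Sketch: count nouveau failures and fatal X errors in a log stream.
# The three sample lines below stand in for /var/log/syslog.
matches=$(grep -cE 'nouveau.*Failed to idle channel|Fatal X error' <<'EOF'
Jan  1 17:49:20 HomePC kernel: [15178.991600] [drm] nouveau 0000:00:0d.0: Failed to idle channel 1.
Jan  1 17:49:20 HomePC mdm[1193]: WARNING: mdm_slave_xioerror_handler: Fatal X error - Restarting :0
Jan  1 17:49:21 HomePC acpid: 1 client rule loaded
EOF
)
echo "$matches"   # number of suspicious lines in the sample
```

On the sample above, two of the three lines match, which mirrors the nouveau-failure-then-X-crash pairing in the logs.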

So I decided to switch to the proprietary NVIDIA driver. The process is easy, although the download is a bit time-consuming: Menu > Preferences > Software Sources, switch to the last tab (Additional Drivers), and select the NVIDIA binary driver (the proprietary and tested version in my case). The system did not demand a reboot after installation was complete, but inxi -G produced output that did not specify the new driver, so I rebooted just to be safe. After the reboot, inxi -G reports "FAILED: nvidia", which is apparently a bug (it reported "FAILED: nouveau" before the driver change), but it also shows "GLX Version: 2.1.2 NVIDIA 304.43", indicating I successfully switched to the NVIDIA driver.
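A second opinion on which driver is actually loaded, independent of inxi's quirks, is to look for the kernel module itself. A sketch (lsmod is standard; lspci, mentioned in the comment, comes from the pciutils package):

```shell
# Sketch: report the kernel module driving the GPU, if one is loaded.
drv=$(lsmod 2>/dev/null | grep -oE '^(nouveau|nvidia)' | head -n1)
if [ -n "$drv" ]; then
    msg="GPU driver module: $drv"
else
    msg="no nouveau/nvidia module found (or lsmod unavailable)"
fi
echo "$msg"
# "lspci -k" also shows a "Kernel driver in use:" line per device
```

If the switch took, the first line of output should name nvidia rather than nouveau.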

We'll see if my decision to switch drivers proves rash. The Nouveau driver worked fine for months, and it's possible something else will blow up the NVIDIA driver.