Saturday, June 25, 2016

Enforcing Simultaneous Arrivals

I'm recapping here an answer to a modeling question that I just posted on a help forum. (Since it will now appear in two different places, let's hope it's correct!)

The original poster (OP) was working on a routing model, in which vehicles (for which I will use $v$ and if needed $v'$ as indices) are assigned to visit customers (index $c$) at various (strictly positive) times. Different customers need different numbers of vehicles. We'll assume, for simplicity, that each customer is visited only once (or at most once, if you're allowed to blow off customers). If customers can be revisited, we can finesse it by creating clones of the customers (so $c$ and $c'$ are actually the same customer at different times).

I'll skip much of the formulation, which is irrelevant to the issue here. The problem confounding the OP was that if multiple vehicles serve the same customer, they must arrive at the same time. I suggested two ways to formulate this, described below.

Discrete time approach

Let's assume that the time frame (call it 0 to $T$) can be adequately approximated by a discrete sequence $$0=t_0 \lt t_1 \lt \dots \lt t_N=T.$$ Define a set of binary variables $x_{cvn}$ to be 1 if vehicle $v$ serves customer $c$ at time $t_n$ and 0 otherwise. To enforce the simultaneity requirement, we add the constraints $$x_{cvn} = x_{cv'n}$$for all combinations of $c$, $n$ and $v\neq v'$. The good news is that it's relatively easy to do and numerically stable. The bad news is that it requires a ton of binary variables ($C\times V\times N$, where $C$ is the number of customers, $V$ the number of vehicles and $N$, as noted, the number of time epochs) and constraints ($C\times N \times \frac{V\times (V-1)}{2}$). The constraints could hypothetically be added "lazily" via callbacks, but I'm not sure whether that would help or hurt performance.
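To get a feel for how quickly the model grows, here is a small sketch (all class and method names are mine, purely for illustration) that enumerates the pairwise simultaneity constraints and confirms the variable and constraint counts above:

```java
import java.util.ArrayList;
import java.util.List;

public class DiscreteTimeSize {
    // One equality constraint "x[c][v][n] == x[c][w][n]" per customer c,
    // epoch n, and unordered vehicle pair v < w.
    static List<String> simultaneityConstraints(int C, int V, int N) {
        List<String> cons = new ArrayList<>();
        for (int c = 0; c < C; c++)
            for (int n = 0; n < N; n++)
                for (int v = 0; v < V; v++)
                    for (int w = v + 1; w < V; w++)
                        cons.add(String.format("x[%d][%d][%d] == x[%d][%d][%d]",
                                               c, v, n, c, w, n));
        return cons;
    }

    public static void main(String[] args) {
        int C = 50, V = 10, N = 100;           // hypothetical problem sizes
        long vars = (long) C * V * N;          // binary variables: 50,000
        List<String> cons = simultaneityConstraints(C, V, N);
        // cons.size() matches C * N * V * (V - 1) / 2 = 225,000
        System.out.println(vars + " variables, " + cons.size() + " constraints");
    }
}
```

Even at these modest sizes the constraint count dwarfs the variable count, which is what makes the "lazy" option tempting.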

Big $M$ approach

Let's stipulate, for convenience, that a vehicle is assigned arrival time 0 at a customer if the vehicle does not serve that customer. The alternative approach relies on the fact that the absolute difference between any two arrival times cannot exceed $T$. We can use a "big $M$" model with $T$ serving as our value for the infamous coefficient $M$.

Let $y_{cv}$ denote the time at which vehicle $v$ arrives at customer $c$ (or 0 if the assignment of $v$ to $c$ is not made). The $y$ variables can be considered continuous, and we do not need to discretize time. We will also need binary variables $z_{cv}$ taking the value 1 if vehicle $v$ services customer $c$ and 0 if not. We add the constraints $$y_{cv}-y_{cv'} \le T(2 - z_{cv} - z_{cv'})$$and $$y_{cv'}-y_{cv} \le T(2 - z_{cv} - z_{cv'})$$for all combinations of $c$ and $v\neq v'$. We'll also need constraints to tie $y$ and $z$ together, namely $$y_{cv}\le Tz_{cv}$$for all combinations of $c$ and $v$. If serving a customer at time 0 is not legal, and $\tau \gt 0$ is the earliest possible service time, throw in$$y_{cv}\ge \tau z_{cv}$$for all $c, v$ pairs. Using the previous notation, this requires $C\times V$ binary variables (the continuous variables are harmless) and at worst $C\times V\times (V-1)+2\times C\times V$ constraints, which is considerably more compact than the previous approach. Two possible downsides -- the only ones I see off the top of my head -- are that if $T$ is large, it may create numerical stability issues and/or cause LP relaxations to be loose (for both of which "big $M$" models are infamous).
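A quick sanity check of the pairwise big-$M$ constraints (names are mine, not part of any formal model): when both $z$ values are 1 the slack term vanishes and the two inequalities force $y_{cv} = y_{cv'}$, while if either $z$ is 0 the slack is at least $T$ and the constraints bind nothing.

```java
public class BigMSimultaneity {
    // Evaluate the two big-M constraints for one customer and one unordered
    // pair of vehicles. T is the horizon, doubling as the big-M coefficient;
    // zV, zW are the binary assignment values; yV, yW the arrival times.
    static boolean satisfied(double T, double yV, double yW, int zV, int zW) {
        double slack = T * (2 - zV - zW);   // 0 iff both vehicles serve c
        return yV - yW <= slack && yW - yV <= slack;
    }

    public static void main(String[] args) {
        double T = 100.0;
        // Both vehicles serve the customer: equal times pass, unequal fail.
        System.out.println(satisfied(T, 40.0, 40.0, 1, 1));  // true
        System.out.println(satisfied(T, 40.0, 55.0, 1, 1));  // false
        // Only one vehicle serves: constraints are slack for any times in [0, T].
        System.out.println(satisfied(T, 40.0, 0.0, 1, 0));   // true
    }
}
```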

Saturday, June 18, 2016

MathJax Whiplash

Technology is continuously evolving, and for the most part that's good. Every now and then, though, the evolution starts to look like a random mutation ... the kind that results in an apocalyptic virus, or mutants with superpowers, or something else that is much more appealing as a plot device in a movie or TV show than in real life.

Back in 2010 I switched math rendering in this blog to MathJax, and things went swimmingly ... for a while. Then rendering got glitchy in Firefox (I forget exactly when), and I ended up installing a trio of Firefox extensions (MathML-Fonts, MathML Font Settings, and Native MathML), which fixed things ... for a while.

Last December things got goofy again. Math formulas were causing excess vertical space to appear between lines. Google Chrome was unaffected; I only saw this in Firefox. (I never use Internet Exploder, Safari or Opera, and don't use Chromium very often, so I have no idea if any of them were affected.) As I noted in an otherwise unrelated post, the problem seemed to cure itself while I was busy searching for help, so I never knew what caused it or, for that matter, what cured it. No matter; rendering once again worked properly ... for a while.

A week or so ago, I noticed that the vertical spacing issue had returned. Today I finally took the time to go looking for help. The closest I came to a fix was finding information in a MathJax FAQ that the configuration I was using in the blog had been deprecated as of MathJax v2.6. In particular, I had been using a configuration that defaulted to native MathML where possible, given that Firefox supports MathML. One sentence in the FAQ caught my eye:
While MathJax’s NativeMML output processor works around various limitations of Firefox/Gecko and Safari/WebKit, the results using the NativeMML output processor may have spacing or other rendering problems that are outside of MathJax’s control.
So I changed the blog configuration to the option now "preferred" by MathJax ("CommonHTML"), but that did not help. Screwing around with the font choices provided by the MathML Font Settings extension, I found exactly one font that displayed correctly. (I forget now which it was, but it was not what I would call an obvious choice.)

I'm not sure what inspired me to try it, but I disabled all three extensions on my desktop machine ... and, lo and behold, the problem disappeared. Apparently some change either in MathJax (circa version 2.6?) or Firefox made the font overrides done by the extensions an enemy rather than an ally. I confirmed that disabling the extensions on my laptop also fixed the problem there.

So rendering of math in Firefox is once again working properly ... for a while. But I'm leaving those extensions installed, just in case.

Wednesday, June 1, 2016

Using CLP with Java

The COIN-OR project provides a home to a number of open source software projects useful in operations research, primarily optimization programs and libraries. Possibly the most "senior" of these projects is CLP, a single-threaded linear program solver. Quoting the project description:
CLP is a high quality open-source LP solver. Its main strengths are its Dual and Primal Simplex algorithms. It also has a barrier algorithm for Linear and Quadratic objectives. There are limited facilities for Nonlinear and Quadratic objectives using the Simplex algorithm. It is available as a library and as a standalone solver. It was written by John Forrest.
Like most of the projects at COIN-OR, CLP is coded in C++, which can make it a bit of a pain to use from Java. So I was quite interested when Nils Löhndorf wrote to me that he had written a Java interface to CLP, clp-java, which he has released as open source. I have an academic license for CPLEX, so I'm not really motivated to switch to CLP, but in the past I've found myself working on open source projects in which I would have liked to embed an LP solver. Since the intent is that people be able to use the program at no cost, embedding a somewhat expensive commercial solver is clearly a non-starter.

So I downloaded clp-java and took it for a test drive. I coded a basic assignment model in Java, with randomly generated assignment costs and a user-selectable dimension. (By "dimension" I mean number of slots and number of assignees.) Each run solves a single assignment problem twice, once with CLP and once with CPLEX. Since CLP is limited to a single thread, I throttled CPLEX to a single thread as well.
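The skeleton of such a benchmark might look like the following sketch (the class and method names are mine, not from clp-java or CPLEX; the actual solver calls are omitted). Fixing the random seed is what lets both solvers see identical cost data on each run:

```java
import java.util.Random;

public class AssignmentBenchmark {
    // Random square cost matrix; a fixed seed means CLP and CPLEX
    // are handed identical assignment problems.
    static double[][] randomCosts(int n, long seed) {
        Random rng = new Random(seed);
        double[][] cost = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                cost[i][j] = rng.nextDouble();
        return cost;
    }

    // Wall-clock time of one phase (setup, solve, or solution recovery).
    static long timeMillis(Runnable phase) {
        long start = System.nanoTime();
        phase.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        double[][] cost = randomCosts(5, 42L);
        // A real run would time model construction, solver.solve(), and
        // extraction of the assignment variables as three separate phases.
        long ms = timeMillis(() -> { double unused = cost[0][0] + cost[4][4]; });
        System.out.println("phase took " + ms + " ms");
    }
}
```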

My primary objective was not to see which was faster. Hans Mittelmann at Arizona State University maintains solver benchmarks in his Decision Tree for Optimization Software. You can see there that CLP sometimes beats CPLEX (rather handily) but frequently is slower. (The same can be said for CLP in comparison to Gurobi.) What I wanted to know was the following:
  • is clp-java easy to use (yes!) and well-documented (pretty well);
  • does it produce correct answers (yes, at least on my tests -- CLP and CPLEX obtained identical solutions on all tests);
  • is there any performance penalty for using the clp-java interface (not that I can see).
I started with a dimension of 5 and doubled it with each new run, up to 1280 for the final run. Here's a log-log plot of total end-to-end solution times:
plot of total solution times
CPLEX beat CLP consistently, but as problem dimension grew the ratio of times seemed fairly steady, and the difference (a bit over 25 seconds on my quad-core PC) doesn't strike me as horrible. IMPORTANT DISCLAIMER: We're looking at one replication at each problem size of one particular LP (assignment problem). Ascribe any significance to anything I say at your own peril. Also, I repeated a few replications, and the times varied significantly, even though the seed for the random number generator was held constant (meaning the repeated problems should have been identical to the original versions). So the teeny sample size is exacerbated by fairly high variance.

I broke the timing down into three parts: setting up the model; solving the model; and recovering the solution (getting the solver to cough up the values of the assignment variables). Here are log-log plots for each.


log-log plot of setup times
CPLEX seemed to be a bit faster creating a model object than CLP was, but the difference was pretty minimal (2.3 seconds with 1,280 assignees).


log-log plot of solution times
The bulk of the time for either program is spent solving the problem, so this plot resembles the plot of overall execution times.


log-log plot of time to recover the solution
For the small models, both programs returned the variable values so quickly that the recorded time (in milliseconds) was zero. Interestingly (at least to me), on the larger instances CLP returned solution values faster than did CPLEX.

Again, the takeaway from the experiments is not which solver is better; it's that Nils's interface works correctly and does not seem to introduce any glaring inefficiencies. Overall, I am quite impressed with Nils's creation. He provides an all-in-one jar file that contains not only his code but also CLP and some third-party libraries. You only have to import clp-java into your project and put the jar in the class path. When you run your program, the necessary libraries are unpacked into a temporary directory. Before the program exits, it cleans up after itself, deleting the temporary directory. (I confirmed that it left no digital artifacts behind.)

There is one bug yet to be worked out (as of this writing). At least on Ubuntu 14.04 and Linux Mint 17.2 (based on Ubuntu 14.04), "out of the box" you get linking errors at run time. Apparently, despite the fact that the COIN libraries CLP uses are sitting in the same temporary directory that CLP is sitting in, it looks for them elsewhere and can't find them. The workaround is to install the coinor-clp package (and a couple of dependencies) from the Canonical repositories, using apt or Synaptic. That fixed the linking problem for me and one other user. I have no idea if the same problem crops up in other operating systems.

So if you are programming in Java and looking for an open-source LP solver that you can build into a project, check out clp-java and CLP.