Wednesday, July 31, 2013

Benders Decomposition with Integer Subproblems

I'm not sure why, but occasionally people post questions related to an attempt to apply Benders decomposition in a situation where the subproblem contains integer variables. A key question is how you generate cuts from the subproblem, since discrete problems do not enjoy the same duality theory that continuous problems do.

A typical application of Benders decomposition to integer programming starts with a problem of the form\[ \begin{array}{lrclcc} \textrm{minimize} & c_{1}'x & + & c_{2}'y\\ \textrm{subject to} & Ax & & & \ge & a\\ & & & By & \ge & b\\ & Dx & + & Ey & \ge & d\\ & x & \in & \mathbb{Z}^{m}_+\\ & y & \in & \mathbb{R}^{n}_+ \end{array} \]This decomposes into a master problem\[ \begin{array}{lrclcc} \textrm{minimize} & c_{1}'x & + & z\\ \textrm{subject to} & Ax & & & \ge & a\\ & h'x & & & \ge & h_0 & \forall (h,h_0)\in \mathcal{F}\\ & h'x & + & z & \ge & h_0 & \forall (h, h_0)\in \mathcal{O}\\ & x & \in & \mathbb{Z}^{m}_+ \\ & z & \ge & 0 \end{array} \]and a subproblem\[ \begin{array}{lrclcc} \textrm{minimize} &  c_{2}'y\\ \textrm{subject to} & By & \ge & b\\ & Ey & \ge & d - Dx\\ & y & \in & \mathbb{R}^{n}_+ \end{array} \]where $\mathcal{F}$ and $\mathcal{O}$ are  sets of coefficient vectors for "feasibility" cuts (pushing $x$ in directions that make the solution $(x,y)$ feasible) and "optimality" cuts (pushing $z$ upward so as not to underestimate $c_2'y$) respectively. The subproblem is a linear program, and its dual solution supplies the coefficient vectors $(h,h_0)$ for both types of master cuts.

So what happens if $y$ is integer-valued ($y\in\mathbb{Z}^n_+$) rather than continuous ($y\in\mathbb{R}^n_+$)? I don't have a definitive answer, but there are a few things that can be tried. The suggestions below should work equally well (or poorly) when $y$ is a mix of integer and continuous variables.

Proceed as usual

The subproblem is now an integer program, but you can always relax it to a linear program and obtain the dual solution to the relaxation. If the current master solution $x = \hat{x}$, $z = \hat{z}$ makes the linear relaxation of the subproblem infeasible, you can be sure it also makes the actual subproblem infeasible, and thus you will get a legitimate feasibility cut. If the subproblem is feasible, the dual to the relaxation will produce a cut that forces $z$ to be at least as great as the objective value of the relaxation, which is a legitimate lower bound for the actual subproblem objective value.

The news here is not all good, though. It is possible that $\hat{x}$ makes the subproblem integer-infeasible but with a feasible relaxation, in which case you will not get the feasibility cut you need. If the subproblem is feasible (let's say with optimal solution $\hat{y}$) but $\hat{z}$ underestimates its objective value $c_2'\hat{y}$, you want an optimality cut that forces $z\ge c_2'\hat{y}$ when $x=\hat{x}$; but the cut you get forces $z\ge w$ where $w$ is a lower bound for $c_2'\hat{y}$, and so there is the possibility that $c_2'\hat{y} > \hat{z} \ge w$ and the optimality cut accomplishes nothing.
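To see how an optimality cut from the relaxation can fail to bite, consider a one-variable toy subproblem (my own made-up data, not from any real model): minimize $y$ subject to $3y\ge 2$, $y\in\mathbb{Z}_+$. A quick brute-force check:

```python
# Toy subproblem (made-up data): minimize y subject to 3*y >= 2, y >= 0.
# As an LP the optimum is y = 2/3; as an IP it is y = 1.
lp_bound = 2 / 3  # LP relaxation optimum, solved by inspection

# brute-force the integer program over a safe finite range
ip_optimum = min(y for y in range(10) if 3 * y >= 2)

# An optimality cut built from the relaxation only forces z >= 2/3, so a
# master solution with z_hat = 0.9 survives the cut even though the true
# subproblem cost is 1.
z_hat = 0.9
assert lp_bound <= z_hat < ip_optimum
```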

"No good" constraints for infeasibility

Suppose that $x$ consists exclusively of binary variables. (General integer variables can always be converted to binary variables, although it's not clear that the conversion is in general desirable.) We can exclude a particular solution $x=\hat{x}$ with a "no good" constraint that forces at least one of the variables to change value:\[\sum_{i : \hat{x}_i=0} x_i + \sum_{i : \hat{x}_i = 1} (1-x_i)\ge 1.\]This gives us another option for feasibility cuts. Solve the subproblem as an IP (without relaxation); if the subproblem is infeasible, add a "no good" cut to the master problem. Note that "no good" cuts are generally not as deep as regular Benders feasibility cuts -- the latter may cut off multiple integer solutions to the master problem, whereas a "no good" cut only cuts off a single solution.

If a "no good" cut eliminates just one solution, is it worth the bother? After all, the node that produced $x=\hat{x}$ will be pruned once we realize the subproblem is infeasible. The answer depends on a combination of factors (and essentially reduces to "try it and see"). First, if $x=\hat{x}$ was produced by a heuristic, rather than as the integer-feasible solution to the node LP problem, then you likely cannot prune the current node (and, in fact, the node you would want to prune may be elsewhere in the tree). Adding the "no good" cut may prevent your ever visiting that node, and at minimum will result in the node being pruned as soon as you visit it, without having to solve the subproblem there. Second, if your master problem suffers from symmetry, the same solution $x=\hat{x}$ may lurk in more than one part of the search tree. The "no good" cut prevents your tripping over it multiple times.

It may be possible to strengthen the cut a bit. Suppose that $\hat{x}$ renders the subproblem infeasible (as an IP). There are various ways to identify a subset of the subproblem (excluding the objective function) that causes infeasibility. CPLEX can do this with its conflict refiner; other solvers may have similar functionality. Let $N$ be the set of indices of the $x$ variables and $N_0$ the set of indices of all $x$ variables that appear in the right hand side of at least one subproblem constraint identified as part of the conflict. If we are lucky, $N_0$ is a proper subset of $N$. We can form a "no good" cut for the master problem using just the variables $x_i, i\in N_0$, rather than all the $x$ variables, and obtain a somewhat deeper cut (one that potentially cuts off multiple master problem solutions). The caveat here is that running something like the CPLEX conflict refiner, after determining that the subproblem is infeasible, may eat up a fair bit of CPU time for questionable reward.

"No good" constraints for optimality

It may be possible to exploit the technique I just described to create ersatz optimality constraints as well. Suppose that the current incumbent solution is $(\tilde{x}, \tilde{y})$, and that some node gives an integer-feasible solution $(\hat{x},\hat{z})$ for the master problem. It must be the case that\[c_1'\hat{x}+\hat{z}<c_1'\tilde{x}+c_2'\tilde{y},\]else the master problem node would be pruned based on bound. Now suppose we pass $\hat{x}$ to the IP subproblem and obtain an optimal solution $y=\hat{y}$. If $c_1'\hat{x}+c_2'\hat{y}<c_1'\tilde{x}+c_2'\tilde{y}$, we have a new incumbent solution. If not, then $x=\hat{x}$ cannot lead to an improved solution, and we can add a "no good" cut to eliminate it (again recognizing that this is a weak constraint).
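The bookkeeping at such a node can be sketched as follows (a minimal illustration; the function and argument names are mine, and the cost vectors are plain Python sequences):

```python
def handle_candidate(c1, c2, x_hat, y_hat, incumbent_cost):
    """Process an integer-feasible master solution x_hat whose IP
    subproblem solved to optimality with solution y_hat. Returns
    ('accept', cost) when (x_hat, y_hat) is a new incumbent, or
    ('no_good', incumbent_cost) when x_hat should be excluded with a
    'no good' cut. (Names are mine, not from the post.)"""
    cost = (sum(a * b for a, b in zip(c1, x_hat))
            + sum(a * b for a, b in zip(c2, y_hat)))
    if cost < incumbent_cost:
        return "accept", cost          # new incumbent (x_hat, y_hat)
    return "no_good", incumbent_cost   # cut off x_hat in the master
```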


That pretty much exhausts my quiver. If any readers have other ideas for generating Benders cuts from integer subproblems, I invite you to post them in comments.

Thursday, July 25, 2013

Extracting a Connected Graph

Some optimization problems can be characterized as follows:
  • a graph or digraph, either explicitly stated or implicit, underlies the problem;
  • part of the problem is to select a subgraph by selecting either nodes, edges or both; and
  • the selected subgraph must be connected.
Below I will show one way to model this. What I propose is mathematically valid, but I have no idea whether it is the most efficient approach. One disclaimer: in the case of a directed graph, this method will guarantee weak connectivity but not necessarily strong connectivity.

Let $\mathcal{V}$ be the set of vertices and $\mathcal{E}\subseteq \mathcal{V} \times \mathcal{V}$ the set of edges of the underlying graph. Create binary variables $x_v \in \{0,1\}\,\,\forall v\in\mathcal{V}$ and $y_{(v,w)}\in \{0,1\}\,\,\forall (v,w)\in \mathcal{E}$. They signal respectively the selection of vertex $v$ and edge $(v,w)$ for inclusion in the constructed subgraph. For consistency, add either the constraints\begin{gather*} y_{(v,w)}\le x_{v}\\ y_{(v,w)}\le x_{w} \end{gather*} or the constraints\[ 2y_{(v,w)}\le x_{v}+x_{w} \] for all $(v,w)\in\mathcal{E}$. The former yields a bit tighter model, while the latter yields a bit smaller model. They serve the same purpose: to avoid "dangling" edges (edges not connected to selected vertices at both ends). Also, if the original graph is undirected, we can exploit\[y_{(v,w)}=y_{(w,v)}\,\,\forall (v,w)\in\mathcal{E},\] to effectively halve the number of $y$ variables in the model.

To enforce connectedness, we will run a flow through the selected subgraph (treating edges as bidirectional for flow purposes), siphoning off one unit of flow at each selected vertex. Let $f_{(v,w)}\ge 0\,\,\forall (v,w)\in\mathcal{E}$ denote the flow along edge $(v,w)$. If we intend to siphon off one unit of flow at each vertex, the maximum flow volume we need is $\left|\mathcal{V}\right|$, the cardinality of the original vertex set. So we can limit flow to selected edges with the constraints\[f_{(v,w)}\le\left|\mathcal{V}\right|\left(y_{(v,w)}+y_{(w,v)}\right)\,\,\forall (v,w)\in\mathcal{E}.\]Note that, in the case of a digraph, we allow flow in both directions $v\rightarrow w$ and $w\rightarrow v$ if either of the arcs $(v,w)$ and $(w,v)$ is selected.

Before addressing flow balance constraints, we need to deal with the one tricky part: where does the flow originate? We will postulate a source node with sufficient supply sitting outside the original graph. For $v\in\mathcal{V}$, let $g_v\in\left[0,\left|\mathcal{V}\right|\right]$ denote a flow volume from the source node directly to vertex $v$ ("directly" meaning not passing through any edges of the original graph). We can now express conservation of flow ("flow in equals flow out") as\[g_v + \sum_{(w,v)\in\mathcal{E}} f_{(w,v)} = \sum_{(v,w)\in\mathcal{E}} f_{(v,w)} + x_v.\]If vertex $v$ is not selected ($x_v=0$), then by our previous constraints $y_{(v,w)}=y_{(w,v)}=0\,\,\forall w$, forcing  $f_{(v,w)}=f_{(w,v)}=0\,\,\forall w$, and so this constraint reduces to $g_v=0$ (no external flow into unselected vertices). If vertex $v$ is selected ($x_v=1$), the constraint balances flow after extracting exactly one unit.

There is one thing left to do: restrict our source node to supplying a single (selected) vertex. To do this, we impose an arbitrary total ordering ($\preceq$) on $\mathcal{V}$. This allows us to define, for each $v\in\mathcal{V}$, the set $\mathcal{P}(v)=\{w\in\mathcal{V}\,:\, w\precneqq v\}$ of predecessors of vertex $v$. By adding the constraints\[g_v\le\left|\mathcal{V}\right|\left(1-x_w\right)\,\,\forall w\precneqq v\]we limit the source node to supplying only the first (in the precedence order) selected vertex. Since each of the other selected vertices draws its supply from the first selected vertex (call that the "entry vertex") by conservative flows over selected edges between selected vertices, there must be a path in the selected subgraph from the entry vertex to every other selected vertex, meaning the subgraph is connected.
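Putting the pieces together, here is a plain-Python sketch that emits the entire constraint system as data, one `(coefficients, sense, right-hand side)` triple per constraint. It is solver-agnostic, and the variable naming scheme (`x[v]`, `y[v,w]`, `f[v,w]`, `g[v]`) is my own; feeding the triples to an actual MIP solver is left to the reader:

```python
def build_connectivity_constraints(V, E):
    """Emit the connected-subgraph constraints as (coeffs, sense, rhs)
    triples, where coeffs maps a variable name to its coefficient.
    V is the vertex list (its order serves as the total order used for
    the entry-vertex constraints); E is a list of directed edge pairs."""
    n = len(V)
    cons = []
    for (v, w) in E:
        y = f"y[{v},{w}]"
        # no dangling edges: y_(v,w) <= x_v and y_(v,w) <= x_w
        cons.append(({y: 1, f"x[{v}]": -1}, "<=", 0))
        cons.append(({y: 1, f"x[{w}]": -1}, "<=", 0))
        # flow only on selected edges: f_(v,w) <= n*(y_(v,w) + y_(w,v))
        cap = {f"f[{v},{w}]": 1, y: -n}
        if (w, v) in E:
            cap[f"y[{w},{v}]"] = -n
        cons.append((cap, "<=", 0))
    for v in V:
        # conservation of flow: g_v + inflow = outflow + x_v
        bal = {f"g[{v}]": 1, f"x[{v}]": -1}
        for (a, b) in E:
            if b == v:
                bal[f"f[{a},{b}]"] = bal.get(f"f[{a},{b}]", 0) + 1
            if a == v:
                bal[f"f[{a},{b}]"] = bal.get(f"f[{a},{b}]", 0) - 1
        cons.append((bal, "==", 0))
    for i, v in enumerate(V):
        # external flow only at the first selected vertex:
        # g_v <= n*(1 - x_w), rewritten as g_v + n*x_w <= n, for all w < v
        for w in V[:i]:
            cons.append(({f"g[{v}]": 1, f"x[{w}]": n}, "<=", n))
    return cons
```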

Addendum: The last step (restricting external flow to one vertex) adds $O\left(\left|\mathcal{V}\right|^2\right)$ constraints to the model. If we know a priori that at least one vertex from some (hopefully small) subset $\tilde{\mathcal{V}}\subset\mathcal{V}$ is guaranteed to be included in the subgraph, we can require that flow enter the subgraph through one of those vertices. Define the (arbitrary) ordering on $\mathcal{V}$ so that $\tilde{\mathcal{V}}$ comes first:\[v\prec w \,\,\forall v\in\tilde{\mathcal{V}},w\in\mathcal{V}\backslash\tilde{\mathcal{V}}.\]The last set of constraints then becomes\begin{gather*}g_v\le\left|\mathcal{V}\right|\left(1-x_w\right)\,\,\forall v,w\in\tilde{\mathcal{V}}\,\ni\,w\precneqq v\\g_v = 0\,\,\forall v\in\mathcal{V}\backslash\tilde{\mathcal{V}}\end{gather*}which adds $O\left(\left|\tilde{\mathcal{V}}\right|^2\right)$ constraints (and gets rid of some of the $g$ variables).

Addendum #2: I have a bit of a blind spot when it comes to SOS1 constraints -- I usually only think of them in terms of binary variables, whereas they can in fact involve continuous variables. An alternative to the final $O\left(\left|\mathcal{V}\right|^2\right)$ set of constraints is simply to declare the $g$ variables to be a type 1 specially ordered set. Depending on the solver, that may implicitly create additional binary variables (which I was trying to avoid when I crafted the last set of constraints -- see J-F Puget's comment below and my response), or it may just alter the branching scheme of the solver. To exploit an SOS1 constraint, a solver typically needs "weights" for the variables, to guide its branching decisions. I'm not sure what useful weights for the $g$ variables would be, but my first guess would be to weight $g_v$ by the number of successor nodes to $v$ in my precedence order.

Addendum #3: The restriction to a single entering flow can be done in  $O\left(\left|\mathcal{V}\right|\right)$ rather than $O\left(\left|\mathcal{V}\right|^2\right)$ constraints as follows:\[\sum_{w\in\mathcal{V}\,:\,w\succneqq v}g_w\le\left|\mathcal{V}\right|\left(1-x_v\right)\,\,\forall v\in\mathcal{V}.\]Selecting any node $v$ precludes inbound flows at all successor nodes, so the only possible entry point is the first selected node. I think this formulation has a weaker relaxation than the first one I gave above, but it certainly has fewer constraints.

Monday, July 22, 2013

Bidding on a Pig in a Poke

Not too long ago I set up a profile and a "net" on Zombal, an online marketplace where individuals and companies can bid out scientific projects on which they need help. Someone who needs help "launches" a "zomb" (essentially, posts a Request for Quotation) in which they describe what they need, attach some keywords and specify a budget in "credits" (pegged to the US dollar). Someone offering to provide help "catches" the zomb in a "net" (essentially, receives notification of a keyword match between their areas of interest/expertise and the RFQ) and bids an amount not to exceed the budget. Interested readers can visit their site for more details. (Also, note to +Laura McLay: "Zombs" have nothing to do with zombies.)

I've never bid on a zomb, but I've set up an OR-related net and caught several, and I've observed something interesting about the process. Postings include a button that lets a visitor ask questions of the launcher, presumably before posting a bid. In some cases the launcher may also include an email address for direct questions, although this seems to be the exception rather than the rule. What's interesting to me is that I don't think I've ever seen a question posted (other than one I posted recently). Possibly people are asking relevant questions by email to avoid sharing the information with their competitors, but given the relatively low frequency with which I've seen email addresses for launchers, I'm skeptical. (Also, it seems a bit short-sighted to me as a strategy: public answers might deter a competitor from underestimating the work and underbidding.)

A recent zomb I caught specified that the launcher wanted an automated scheduling system for a school, and gave a few details about context but not much more information. So far, four people have bid, all at 100% of the budget. Here are some (but by no means all) of the questions I'm not seeing on that zomb or any other OR-related zomb I've encountered:
  • What does the launcher want? Sometimes this is fairly clear -- "I need someone to generate control charts from some data"-- but other times it's less clear. Bear in mind that while the problem may have an OR flavor, the launcher may have no idea about OR.
  • What does the launcher really want? For OR clients in general, this is often not the same as what they say they want, which may or may not accurately reflect what they think they want.
  • What does the launcher need? This hopefully resembles the previous answers, but may not be identical. In one instance, I saw someone requesting that a linear regression be run for them, and the nature of the data strongly suggested to me that a nonlinear model would be more appropriate.
  • What are the scope and scale of the problem? In the case of the scheduling zomb, the launcher did not specify the number of students, instructors, rooms, courses or time slots. It's reasonable for a user to assume that a program that can accommodate 40 students in two rooms with two teachers can equally well accommodate 400 students in 20 rooms with 20 teachers, but I suspect most people reading this post will be familiar with the phrase "combinatorial explosion". That's a question of scale. On the scope side: does the model need to assign instructors to courses, or is that already given? And so on.
  • What resources does the launcher have to implement a solution? Will the launcher be able to afford a CPLEX license? Do they already have Gurobi installed?
  • Who is responsible for coding the solution? (This assumes that coding is involved, which in OR is very often the case.) If I'm responsible for coding, you can bet the farm that the next question I ask will be ...
  • Who is responsible for maintenance? I've met people who thought they could write and debug code, hand it off and never see it again. Even if the code is totally bug-free (limited to a few variants of the "Hello World" program), somewhere down the road a system upgrade may very well break it. (If you don't believe me, go find someone responsible for programs written in Python 2.x that do integer division, and ask them about Python 3.x ... but not if you're offended by strong language.)
  • How much documentation/training is needed? I've used undocumented or poorly documented software whose operation was crystal clear to the authors (hence the lack of documentation). Generally speaking, the authors' minds have worked considerably differently than mine, possibly because I've avoided recreational drugs.
As I said, I make no claim that this list is exhaustive. I bet that many (most?) professional consultants have a checklist they run through before responding to RFQs. Some catchers may be asking some of these questions via email for some OR-related zombs, but I strongly suspect that many bidders are bidding on a pig in a poke.

Saturday, July 6, 2013

Integer Variables and Quadratic Terms

This is a sequel to a previous post, "Binary Variables and Quadratic Terms", in which I described how to linearize the product of a binary variable and a general variable (possibly discrete, possibly continuous). Here I'll deal with linearizing $z=xy$ where $x\in X$, $X$ a discrete set, and $y$ is an arbitrary variable.

To linearize the product, I'll need both variables to be bounded. So assume that $L\le y \le U$ and that $N_X=\left|X\right|\lt \infty$. There are a couple of ways (that I know) to proceed, and both involve introducing binary variables.

The more obvious route, suitable when $N_X$ is not large, is to add $N_X$ binary variables $w_a,\, a\in X$ and set\[x=\sum_{a\in X} aw_a,\]along with the SOS1 constraint\[\sum_{a\in X} w_a=1.\]Then\[z=\sum_{a\in X}aw_ay,\]and we can linearize each of the products $w_ay$ as explained in the previous post. More specifically, we set $z=\sum_{a\in X}az_a$ where $z_a=w_ay$ is a new continuous variable, then linearize the expression for $z_a$ (creating, per the previous post, four new constraints for each $z_a$).
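As a sanity check, one can brute-force the linearization of a single product $z_a = w_a y$. The sketch below assumes the standard four-constraint construction (which is what I understand the previous post to describe): $z_a \le U w_a$, $z_a \ge L w_a$, $z_a \le y - L(1-w_a)$, $z_a \ge y - U(1-w_a)$. The function name is mine.

```python
def satisfies_linearization(w, y, z, L, U, tol=1e-9):
    """True iff (w, y, z) satisfies the four standard constraints that
    linearize z = w*y for binary w and L <= y <= U."""
    return (z <= U * w + tol
            and z >= L * w - tol
            and z <= y - L * (1 - w) + tol
            and z >= y - U * (1 - w) - tol)

# Brute-force check on a grid: the only surviving z is w*y.
L, U = -2.0, 3.0
for w in (0, 1):
    for k in range(11):  # y on a 0.5-step grid over [L, U]
        y = L + 0.5 * k
        feasible = [z / 100 for z in range(int(100 * L), int(100 * U) + 1)
                    if satisfies_linearization(w, y, z / 100, L, U)]
        assert all(abs(z - w * y) < 0.02 for z in feasible)
```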

The other approach is very similar but usually more efficient in terms of added variables and constraints when $X$ is an interval ($X=\{a, a + 1, \dots, a + N_X-1\}$) and $N_X$ is not small. Rather than creating a binary variable for each value in the domain $X$, we perform a binary expansion of $x$:\[x=a + \sum_{i=0}^{M} 2^iu_i\]with $u_0,\dots,u_{M}\in\{0,1\}$ and $M=\left\lceil \log_{2}\left(N_{X}\right)\right\rceil - 1$ (the smallest $M$ for which $2^{M+1}\ge N_X$). Unless $N_X$ happens to be an integral power of 2, we also need the constraint $x\le a+N_X-1$. The product $z=xy$ can now be written\[z=ay+\sum_{i=0}^M 2^iw_i\]where $w_i=u_iy$, and again we linearize each $w_i$ as explained in the prior post.
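A quick sketch of the expansion itself (the function name is mine), choosing the smallest number of bits that covers the domain and verifying the round trip over the whole interval:

```python
from math import ceil, log2

def binary_expansion(x, a, N):
    """Bits u_0..u_M with x = a + sum(2**i * u_i), valid for any
    x in {a, ..., a + N - 1}."""
    M = ceil(log2(N)) - 1  # smallest M with 2**(M+1) >= N
    r = x - a
    return [(r >> i) & 1 for i in range(M + 1)]

# Round trip over the whole domain X = {5, ..., 10} (N_X = 6, so M = 2):
a, N = 5, 6
for x in range(a, a + N):
    u = binary_expansion(x, a, N)
    assert x == a + sum(2**i * b for i, b in enumerate(u))
```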

Friday, July 5, 2013

MythTV: No More Midnight Confessions

With a tip of the hat to the late Rob Grill and The Grass Roots ... (video)

I previously reported finding my Mythbuntu PC, used strictly for recording TV shows, up and running occasionally when no recording was scheduled. Further investigation revealed that phantom wakeups always occurred at midnight UTC, and only when nothing was scheduled. (If a recording was scheduled later in the week, the PC would sleep through midnight UTC without problem.) I've finally identified (and fixed) the problem, and it has nothing to do with MythTV.

The BIOS (a somewhat dated Phoenix Award Workstation BIOS) contains a flag in the power settings labeled "Resume by Alarm", alongside entries "Date (of Month)" and "Time (of Day)". The "Resume by Alarm" setting was enabled, while the "Date (of Month)" and "Time (of Day)" settings were zeroed out. I don't recall enabling the "Resume by Alarm" setting, but the machine originally came with Windows Media Center (home version) installed, and it's possible WMC requires "Resume by Alarm" to be enabled.

The instructions for setting up ACPI wakeup in MythTV mention "fussy" BIOSes for which one might need to disable real-time clock (RTC) alarms in order to let MythTV wake the system. In my case, it was not necessary -- wake-up went fine -- and so I assumed, as it turns out incorrectly, that my BIOS is not "fussy". It also seemed to me (again, incorrectly) that in order to have the system wake from power-off when an alarm went off, one would need "Resume by Alarm" enabled.

The problem is that "Resume by Alarm" and the adjacent date and time features are designed for specifying a recurring alarm (e.g., wake at 8:27 AM on the 20th of each month, until the alarm is cleared) and (here's the kicker) "Date (of Month)" equal to zero means not that there is no scheduled alarm but rather that the PC should wake at the specified "Time (of Day)" every day of the month. Since "Time (of Day)" was 00:00 and the BIOS uses UTC, this meant wake at midnight UTC every day, unless a different alarm (for an as yet unrecorded program) was set. When the last scheduled recording was done, the BIOS went back to wake-daily-at-midnight mode.

The fix was simply to disable "Resume by Alarm". Alarms set by MythTV still work, and now when nothing is scheduled, nothing happens.

Google Reader is Gone

... and eWeek published an obituary with the provocative title "Google Reader Deserved to Die: 10 Reasons Why". One reason the author gives is that RSS use is fading. Setting aside the possibility that, to paraphrase Mark Twain, "the report of [RSS's] death was an exaggeration", the author's points generally fall into two categories. One consists of reasons why it was a good business decision from Google's perspective. Since providing the service cost money and it did not seem to generate any significant revenues, it's hard to argue with Google's logic -- and rather than bash them for the decision, I'll just thank them for providing the service as long as they did.

The other set of arguments by the author seem to focus on the allegation that most users now get "news" from other sources, and reasons why that (alleged) trend exists and is likely to continue. This may or may not be true -- I'm no expert when it comes to how people get news online -- but the author seems to make the implicit assumption that news feeds are the primary (sole?) reason for using RSS. I use RSS almost exclusively to get "content" feeds: blog posts from non-news blogs; some Twitter feeds (for convenience); questions and answers posted to various non-news forums. I can't say what proportion of the use of Google Reader was from people seeking "news" and what proportion was from people, like me, looking for other content, but I suspect the presumption that it was mostly news gathering may be a reflection of the author's propensities rather than an analysis of usage data.

Related posts (some suggesting alternatives):

Thursday, July 4, 2013

What's Wrong with a Gerontocracy?

The digital version of the June 24, 2013 issue of Time magazine contains a viewpoint column by Grace Wyler titled "Washington is a Gerontocracy". The subtitle adequately conveys her central point:
A 20-something can be the CEO of a billion-dollar company but can't run for the Senate. That doesn't make sense.
Given some of her arguments, I thought an operations research/mathematics perspective might be in order (plus it's a holiday and I have nothing better to do with my time).

First, I should stipulate that I'm actually somewhat in agreement with her. Here, in no particular order, is a partial list of things that a 21 year old US citizen legally can do:
  • consume alcohol (most places, anyway);
  • own property;
  • sign contracts;
  • wed;
  • reproduce;
  • drive an automobile;
  • vote;
  • control a variety of lethal military ordnance;
  • serve (and perhaps die for) his/her country.
Here, per Ms. Wyler, is a list of things that same citizen cannot do:
  • serve in the House of Representatives (minimum age: 25);
  • serve in the Senate (minimum age: 30);
  • serve as President of the United States (minimum age: 35).
Now, one of the truisms of operations research (albeit one not often taught in classrooms) is that when confronted with constraints for a decision, the first question should be something along the lines of "Why do those constraints exist?" It is not uncommon that the actual reason, if you can drill down to it, is something like "Because <insert name of previous or current big-wig> thinks/thought it should be that way" or "Because that's how we've always done it" (since the last time anybody bothered to think about it). With that in mind, I am not ready to dismiss the validity of lower age limits in general and those specified in the Constitution in particular, but I do think it is worth reviewing them and asking whether they are still appropriate.

I also happen to agree that, to some extent,
Capitol Hill could probably take some cues from people who aren’t afraid to move fast and break a few things.
I further agree that
Congress is struggling to keep up, spinning its wheels in a bygone era when people thought the Internet was a “series of tubes.”
Some of their votes on electronic privacy, protection of intellectual property and other technology-related bills have suggested a less than stellar understanding of technology ... but when the current 20-somethings are 40-somethings, that will resolve itself, much as the current "gerontocracy" knows how to drive a car without smacking it with a buggy whip.

Those points made, I have to take issue with some of Ms. Wyler's arguments.
But if any of these wunderkinder want to direct their powerful young minds toward governing the country, they will have to wait a few years. ... [T]here is a serious downside to barring young people from seeking federal office: with the public debate determined and dominated by senior citizens, the country doesn’t get to hear from — and vote for — the interests of young adults[.]
Having earlier touted a few 20-something billionaires who made their money from Internet businesses, Ms. Wyler suddenly seems to have forgotten their online presence (and the online presences of the multitude of non-billionaire 20-somethings who patronize those businesses). Television is also awash in good-looking 20- and 30-somethings, whether on scripted shows or reality shows. If there is one thing young people do not currently lack, it is platforms to make their opinions heard. Furthermore, I'm fairly sure the Constitutional limits on the ages of office-holders do not apply to staff (else the occasional Congressman-page scandal would be reported in the AARP newsletter and not in Time). Our current leaders have ample opportunity to listen to the opinions of their (considerably) younger constituents, whether they exercise those opportunities or not.
The result is that Capitol Hill remains at least a generation behind the rest of the country. In the 113th Congress, elected in 2012, the average age in the House is 57, and the average age in the Senate is 62.
In support of Ms. Wyler's position, using 2012 demographic data from the US Census Bureau, 57 is roughly the 70th percentile of adults (those over age 20) and roughly the 78th percentile of the entire population. So Congress does in fact skew a bit old. On the other hand, if we equate a "generation" to approximately 25 years, the "average" member of Congress is a bit more than a generation older than the 20-somethings and roughly a generation younger than the eldest segment of the population. So if they are "a generation behind the rest of the country", the "rest of the country" must exclude anyone with an AARP membership ... and trust me, you do not want to meddle with us. (Old proverb: Old age and treachery will overcome youth and skill.)
In his book Too Young to Run?: A Proposal for an Age Amendment to the U.S. Constitution, [Pomona College politics professor John] Seery writes that the age restrictions imposed by the Constitution lower the incentive for young adults to participate in what is supposed to be a representative democracy.
This is certainly plausible, the key being that "lower" conveys a direction but not a magnitude. The young adults in question can run for many local, regional and state offices. Where I live, we had a recent college graduate serve on our city council at 24 and serve as mayor well before his thirtieth birthday. I'm not one to advocate for career politicians, but an apprenticeship before taking national office does not seem unduly burdensome. There is something comforting to the notion that the people steering the ship are not out on their maiden voyage.

On to my personal favorite:
New Jersey Senator Frank Lautenberg, who died June 3 at 89, was the 298th Senator to die in office. Of the 22 Senators who have died in office since 1970, 16 were over 70.
This is again presented in the context that the Congress is too old, so presumably the reader is expected to infer that (a) having 16 of the last 22 senators who died in office be over 70 is somehow bad and that (b) electing younger people to Congress would reduce this number.

Electing younger Senators and Representatives would indeed bring the percentage of those expiring in office who are 70+ down, at least marginally, because it would create more opportunities for someone relatively young to die in office. According to the Social Security Administration actuarial tables, though, the average life expectancy for a 25 year old male is 52 years (death at age 77) and the average life expectancy for a 25 year old female is almost 57 years (death around age 82). If our goal is to decrease the percentage of in-office deaths that occur beyond age 70, let's look at some options that would have a more significant impact:
  • elect younger Senators and Representatives with serious, preferably terminal, illnesses;
  • elect younger Senators and Representatives who enjoy participating in extreme (and hazardous) sports;
  • elect younger Senators and Representatives who drive without using seat belts or ride motorcycles without helmets (preferably at high speed in both cases);
  • elect younger Senators and Representatives who are on active duty in war zones;
  • elect younger Senators and Representatives who are active members of street gangs or drug cartels;
  • elect substantially more young Senators and Representatives with strict term limits (so that they cannot grow old in office).
I kind of like the last one; the other suggestions may not be very practical. More to the point, though, is that we need to ask why it is a bad thing that so many of the Senators who died in office were relatively old (or, from my current vantage point, "not particularly young"). If someone is going to die in office, I personally would just as soon have them lead a long and hopefully fulfilling life first. Now if we want to reduce the frequency of deaths in office, as opposed to the average age of those who go out with their boots on, I can think of a few measures (including cutting back on free food and alcohol provided by lobbyists), and electing younger members of Congress (of reasonable health and with reasonably conservative personal habits) would likely lead to a nontrivial reduction in frequency.