There is an idea floating around the transportation planning community that drivers are sensitive to the variability in travel time when choosing their routes. The idea is that drivers will tend to avoid short routes where travel times are unpredictable and may instead choose longer routes where travel times are consistent from day to day. Some empirical studies suggest that drivers value 2 minutes of standard deviation of travel time as equivalent to 1 minute of average travel time. So, for example, these three trips would be equivalent in a driver’s estimation:

10 minutes ± 8 minutes

12 minutes ± 4 minutes

14 minutes every time
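The equivalence of these three trips is easy to verify with a reliability ratio of 0.5 (1 minute of mean per 2 minutes of standard deviation). A quick sketch, assuming effective impedance = mean + ratio × standard deviation:

```python
# Effective impedance = mean travel time + reliability ratio * standard deviation.
# A ratio of 0.5 encodes "2 minutes of standard deviation ~ 1 minute of mean time".
reliability_ratio = 0.5

trips = [(10, 8), (12, 4), (14, 0)]  # (mean minutes, std. dev. minutes)

for mean, sd in trips:
    print(mean + reliability_ratio * sd)  # each trip evaluates to 14.0
```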

Travel time variability can come from a number of sources: incidents, spot congestion, and plain luck. However, studies have shown that travel time variability is related to either the volume-to-capacity ratio or, alternatively, to the amount of delay occurring day to day.

There is a travel modeler working for the Ohio Department of Transportation, Sam Granato, who has been bugging me for years to include travel time reliability in QRS II. I had been resistant to Sam’s requests because the academic literature has been unclear as to the best method of building paths with reliability information. I found clumsy “heuristics” that would be very difficult to incorporate into QRS II while still retaining all the other features that we have grown to like. About a year ago, I started getting a little more serious about travel time reliability after an e-mail exchange with a University of Texas at Austin professor, Kara Kockelman. Kara was building her own sketch planning modeling platform for the Texas Department of Transportation and wanted to know if I could give her any advice. The problem, as I outlined for Kara, is that “the sum of standard deviations is not the standard deviation of the sum”. Our path building algorithms, all variations of the famous algorithm by Dijkstra, rely on the concept that a path’s impedance is the sum of all the node and link impedances in the path. Any obvious modification of Dijkstra’s algorithm to include reliability would not necessarily find the shortest paths through the network. I doubt that a lack of guaranteed shortest paths would bother many planners, but it would really bother network theorists, and it would likely kill any chance that I could ever publish a paper on the subject in an academic journal.

Another complication is that any path building algorithm must also be behaviorally realistic. Do people choose their trip’s path by comparing all possible paths before embarking or do people switch paths en route if they perceive that the next link or two in their chosen path may be unreliable? I don’t know the answer to this question and the answer can be rather important to how paths are built in the software.

A practical consideration is computation time. Some of the published heuristic algorithms would take a rather long time to complete on a large network, so I needed something that runs fairly quickly. Path building has become a time-consuming part of most simulations, and modelers can become impatient when running dynamic and multiclass traffic assignments.

Given all of my concerns, it wasn’t until after I started beta testing QRS II 8 that I wrote all the code necessary to implement travel time reliability. Everything fell into place rather quickly and Sam Granato was ready and willing to give the new feature a workout. You can read about Sam’s foray into reliability at this link: http://www.trbappcon.org/2013conf/presentations/101_Granato_TRB2013planapps_traveltimereliability.pptx.

There are essentially two ways to do path building in QRS II with reliability information: with and without path corrections. Let’s start with the without case (path building iterations = 1, no corrections), which is easiest to understand. QRS II first calculates a reliability term for each link, where node reliability is also included in the approach link. The reliability term is in units of minutes of extra time: it is the standard deviation of the travel time multiplied by the “impedance to standard deviation ratio”, or “the reliability ratio”. This reliability ratio would be 0.5 for the examples in the first paragraph of this essay. QRS II builds paths in the usual way, but uses the reliability term as part of the link impedance. At the same time, QRS II accumulates the path variance (the variance of the path travel time is the square of the standard deviation of the path travel time). The accumulation of path variance is a bit messy because link travel times may be correlated with one another, so that correlation must be considered. QRS II makes the moderately simplifying assumption that only adjacent links, not separated by a turn, can be correlated, which is not a bad approximation of reality. QRS II can then determine the standard deviation of the path and the true reliability of the whole path. After a path has been determined, QRS II calculates the “path error”, which is the difference between the sum of all the link reliability terms and the true path reliability. The path impedance is then adjusted to reflect the true path impedance before passing that impedance on to other steps in the modeling sequence.
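The variance bookkeeping can be sketched in a few lines. This is not QRS II’s actual code; the link tuples and the correlation coefficient `rho` are illustrative assumptions, with covariance added only for adjacent links not separated by a turn:

```python
import math

def path_reliability(links, reliability_ratio=0.5, rho=0.3):
    """Accumulate path variance under the simplifying assumption that only
    adjacent links (not separated by a turn) are correlated.  The link tuples
    (mean_time, sd, turn_before) and the coefficient rho are illustrative.
    Returns (true path impedance, path error)."""
    variance = 0.0
    total_time = 0.0
    prev_sd = None
    for mean, sd, turn_before in links:
        total_time += mean
        variance += sd * sd
        if prev_sd is not None and not turn_before:
            variance += 2.0 * rho * prev_sd * sd  # covariance of adjacent pair
        prev_sd = sd
    path_sd = math.sqrt(variance)
    # "Path error": sum of per-link reliability terms minus true path reliability.
    link_terms = sum(reliability_ratio * sd for _, sd, _ in links)
    true_term = reliability_ratio * path_sd
    return total_time + true_term, link_terms - true_term
```

Note how the error is always nonnegative when links are uncorrelated: the per-link terms sum the standard deviations, while the true term takes the square root of the summed variances.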

The paths found by this method are not likely to be the minimum paths, but the algorithm makes intuitive sense. Each link, one at a time, is chosen for the path based on its total impedance, that is, its true travel time plus its reliability term, which is a constant fraction of the travel time standard deviation.

Network theorists are groaning. Equilibrium conditions on the network require shortest paths and this algorithm isn’t going to find shortest paths. One suggestion in the literature is to build lots of different paths between an origin and a destination, then compare the paths to each other to find the shortest. If you are lucky or persistent, you might find the shortest path this way. That’s all well and good if you are running one of those tiny test networks that network theorists love, such as the ridiculously simplistic “Sioux Falls” network. QRS II users are much more demanding about network realism and size.

The idea of reliability with path correction (the other option) is to find the shortest path between an origin and a destination such that the total path impedance is minimized, where total path impedance is the sum of the path travel time and the weighted (e.g., 0.5) path standard deviation. The trick is to find a surrogate impedance for each link that, when summed over all links, adds up to the path impedance. This adding-up requirement on the surrogates is needed if we want to use our modified Dijkstra algorithm. When I started working on this problem, I was not sure whether I could find such a surrogate that made any sense. (As an aside, the traditional Dijkstra algorithm builds trees, but QRS II actually builds vines; however, there are similarities in the way this is done.)

One strategy (there are probably others) is that each link’s surrogate impedance should be set to the marginal increase in path impedance if that link is next selected for the shortest path. A problem with this strategy is that we would need to know the exact path in order to calculate the marginal increases. Thus, we must iterate.

1. Find shortest paths with raw impedances.

2. Calculate marginal increases.

3. Find shortest paths with the new surrogate impedances.

4. Calculate even newer marginal increases.

5. Etc.

The iterations stop when the path error becomes zero or the iteration counter has reached its maximum. An interesting aspect of this algorithm is that it will converge in a finite number of steps, if it is going to converge at all. So far, it has not failed to converge for me over tens of millions of paths, but I must still label it a heuristic. I will leave it to the network theorists to determine whether this algorithm works 100.000000% of the time.
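The iteration can be sketched end-to-end on a toy network. Everything here is a simplified stand-in for QRS II’s internals (a plain Dijkstra tree rather than vines, uncorrelated links, a 0.5 reliability ratio, and a made-up three-node network), but it shows the surrogate-impedance update converging:

```python
import heapq
import math

RATIO = 0.5  # reliability ratio, as in the examples above

def dijkstra(graph, origin, dest, impedance):
    """Plain Dijkstra over the current surrogate link impedances."""
    best = {origin: (0.0, None)}  # node -> (cost, predecessor)
    heap = [(0.0, origin)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == dest:
            break
        if cost > best[node][0]:
            continue  # stale heap entry
        for nxt in graph[node]:
            c = cost + impedance[(node, nxt)]
            if nxt not in best or c < best[nxt][0]:
                best[nxt] = (c, node)
                heapq.heappush(heap, (c, nxt))
    path, node = [], dest  # walk predecessors back to the origin
    while best[node][1] is not None:
        path.append((best[node][1], node))
        node = best[node][1]
    return list(reversed(path))

def reliability_path(graph, origin, dest, max_iter=10):
    """Iterate: build a shortest path on surrogate impedances, then reset each
    path link's surrogate to its marginal increase in the path's reliability
    term, until the path error reaches zero (links assumed uncorrelated)."""
    links = {(a, b): ts for a, nbrs in graph.items() for b, ts in nbrs.items()}
    # First pass: naive surrogates, time + RATIO * standard deviation per link.
    surrogate = {k: t + RATIO * s for k, (t, s) in links.items()}
    for _ in range(max_iter):
        path = dijkstra(graph, origin, dest, surrogate)
        true_term = RATIO * math.sqrt(sum(links[e][1] ** 2 for e in path))
        error = sum(surrogate[e] - links[e][0] for e in path) - true_term
        if abs(error) < 1e-9:
            break
        v = 0.0  # running path variance as each link joins the path
        for e in path:
            t, s = links[e]
            surrogate[e] = t + RATIO * (math.sqrt(v + s * s) - math.sqrt(v))
            v += s * s
    return path

# Two routes A->D: via B (two links, 5 min +/- 3 each) or direct (13.5 min, reliable).
graph = {'A': {'B': (5.0, 3.0), 'D': (13.5, 0.0)},
         'B': {'D': (5.0, 3.0)},
         'D': {}}
print(reliability_path(graph, 'A', 'D'))  # -> [('A', 'B'), ('B', 'D')]
```

Because the marginal increments telescope to RATIO × path standard deviation, the surrogates on a path sum exactly to its true impedance, so the error hits zero as soon as the path stops changing between iterations.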

The formula for the standard deviation of a link’s travel time is not my own, but adapted from work by Black, Fearon, Gilliam, and others. This particular formula makes intuitive sense, if you have some familiarity with statistics, and it is relatively easy to calculate within the software. One of the inputs to this formula is a free travel time, which is different from the free travel time entered on each link because it also contains some intersection delay. You may notice a new report file after some QRS II runs, OutputFreeFlow.dta, which contains those specially calculated free travel times. Getting QRS II to calculate free travel times at intersections was not at all simple, so I limited this feature to the latest HCM procedures.

We haven’t had much experience with reliability impedances to date, so I did not provide default values for the Black et al. formula. Perhaps you could try Sam’s parameters for Parkersburg if you are from a lightly congested community; at this point I would recommend field studies to set the parameters in most places. If you want to see how all this works, you can always try the parameters shown in the graphic in the QRS II Reference Manual, which are simply rounded values from the studies by Black and his colleagues. They did most of their work on arterial data. For what it is worth, both Sam and I believe that the exponent on distance, D, should be almost zero for urban streets, because all variation in delay would occur at a single point on the road and distance is mostly irrelevant.

*Alan Horowitz, Whitefish Bay, March 31, 2011; revised April 7, 2011, revised March 3, 2016*