The Elusive Concept of Delay in Urban Traffic

I recently read a paper where the authors tried to compute a congestion index for a city from ridesharing data by comparing daytime average speeds to nighttime average speeds. While reading this paper, I couldn’t help but reflect on an experience I had while working at General Motors Research Laboratories decades ago.

One GM Story about Traffic Delay: The Transportation and Urban Analysis Department had a researcher, Glenn Wanttaja, who was tasked with finding the amount of congestion in a typical American city. He chose Milwaukee. Glenn’s plan was to drive specific trips during peak hours and compare them to similar trips in the wee hours of the morning, when there was little traffic. Glenn never exceeded the speed limit, but otherwise tried to mix in with whatever traffic he could find. He drove a heavily instrumented vehicle borrowed from the Traffic Science Department, with an on-board data collection system that took another skilled researcher to operate. Everyone in Glenn’s department, including me, was asked to assist him for a few days each. This was the late 1970s.

Glenn’s trips were all from one George Webb’s to another George Webb’s. Using a George Webb’s for a trip end was brilliant. George Webb’s is a short-order eatery with dozens of locations in the Milwaukee region, open 24 hours a day. Thus, not only could Glenn design a variety of representative trips, but each trip finished at a location with food, coffee, and well-lighted tables for data documentation and review.

Glenn’s research design was very clean, but expensive. I consider it the gold standard for determining delay from actual traffic, since it follows the scientific method: only one variable, the time of day, changes between compared samples.

A sequence of trips I made with Glenn contained an important lesson about nonintuitive effects of signalization. We have all learned that traffic is slower when there are more vehicles and faster when there are very few vehicles. On a couple of days when I was driving with Glenn, he made a round trip during the AM peak period from downtown Milwaukee to Wauwatosa and back to downtown on normally busy arterial streets. Wauwatosa is a near-in suburb, and the strong direction of traffic flow in the AM rush hour was toward downtown Milwaukee. When we got back to our base George Webb’s, I discovered that our outbound trip (downtown to suburb) was much slower than our inbound trip (suburb to downtown). The result was exactly the same on the second day. It was considerably faster to fight all the commuters heading to downtown than it was to drive leisurely in light traffic to the suburban George Webb’s.

What gives? I will hazard my best guess later.

What is Delay?
Traffic delay is defined by the Highway Capacity Manual as: “Delay resulting from the interaction of vehicles, causing drivers to reduce their speed below the free flow speed.” So we also need the definition of free flow speed, which is defined in the Highway Capacity Manual as: “The average speed of vehicles on a given segment, measured under low-volume conditions, when drivers are free to drive at their desired speed and are not constrained by the presence of other vehicles or downstream traffic control devices (i.e., traffic signals, roundabouts or STOP signs.)”
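
Taken together, those two definitions reduce to a subtraction: delay is the travel time actually experienced minus the travel time at the free flow speed. Here is a minimal sketch of that arithmetic for a single segment; the segment length and the two speeds are illustrative assumptions, not values taken from the HCM.

```python
def segment_delay(length_mi, observed_mph, free_flow_mph):
    """Delay per vehicle (hours) on one segment: observed travel time
    minus free-flow travel time, floored at zero."""
    observed_time = length_mi / observed_mph
    free_flow_time = length_mi / free_flow_mph
    return max(0.0, observed_time - free_flow_time)

# Illustrative numbers only: a 2-mile arterial segment driven at 18 mph
# against an assumed 35 mph free flow speed.
delay_hr = segment_delay(2.0, 18.0, 35.0)
print(f"Delay: {delay_hr * 60:.1f} minutes per vehicle")
```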

The HCM also talks about control delay, incident delay, and geometric delay. The four different forms of delay are often separated in the HCM and elsewhere for evaluation purposes, but they become intertwined upon closer inspection.

These definitions are an OK starting point for some analyses, but I still have many questions. How do we conveniently determine from traffic data how much delay a region’s drivers experience? How can we determine the amount of delay from travel models?

Delay can be measured for a facility, a trip, or a whole population of vehicles.

Before calculating delay, we must first agree on a method for calculating free flow speed. Was it reasonable for Glenn to travel no faster than the speed limit? The answer is no, if one is trying to fit traffic data to a speed-volume curve. We have all observed that the speed limit, today at least, is well below car drivers’ unfettered desired speed, on average. I have difficulty with the word “desired” when it comes to congestion indexes across whole regions. Should a congestion index reward a city simply because it asks its drivers to behave safely and respectfully? (I have noticed public agencies being very reluctant to report actual speeds that exceed the speed limit or otherwise imply their drivers might be operating a bit unsafely.)

I have even more questions than answers. How does someone do a floating car study when there are no other vehicles around in the middle of the night? Does using a single driver for all trips introduce an investigator bias or does it provide consistency of measurement? Do drivers at night behave the same as drivers during the day?

A Need for Accurate Trip Information: Delay is a personal experience. The typical driver perceives delay at individual spots on the road, but that driver is likely more concerned with the amount of delay over a whole trip. Methods of measuring speed abound. We have traffic detectors (loops and radar), GPS for fleets and in-car navigation systems, and various vehicle re-identification systems. For a congestion index we should favor those technologies that measure speeds over full trips or fairly long trip segments involving a variety of facilities.
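
To make that concrete, here is a minimal sketch of one trip-based index: the ratio of total observed trip time to total free-flow trip time across a sample of trips. The trip records and the particular ratio form are assumptions for illustration; real indexes differ in how they define the free-flow baseline and weight the trips.

```python
# Each record is (observed_minutes, free_flow_minutes) for one sampled trip.
# The numbers are made up for illustration.
trips = [
    (34.0, 22.0),   # suburb-to-downtown commute
    (18.5, 15.0),   # short arterial trip
    (52.0, 31.0),   # cross-town trip on mixed facilities
]

observed_total = sum(obs for obs, ff in trips)
free_flow_total = sum(ff for obs, ff in trips)

# A travel-time-index style measure: 1.0 means no delay, 1.5 means trips
# take 50% longer than they would at free flow.
congestion_index = observed_total / free_flow_total
print(f"Congestion index: {congestion_index:.2f}")
```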

There is no reason to believe that the trips or trip segments spewing from the chosen data source are closely representative of all car trips. The lack of representativeness is most acute for nighttime trips, which differ greatly from daytime trips in their purposes, trip lengths, routes, and locations. All of these factors must be meticulously controlled by some sort of sampling procedure. Otherwise, a congestion index could be strongly biased and not comparable between different cities.
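
One such procedure is to reweight the sampled trips so their mix matches a known target distribution, for example one taken from a household travel survey. The sketch below applies simple post-stratification weights by trip purpose and time period; the categories, shares, and the survey source are all assumptions for illustration.

```python
# Assumed target shares of trips by (purpose, period), e.g. from a regional
# household travel survey, and the shares observed in the GPS/rideshare sample.
target_share = {("work", "peak"): 0.30, ("work", "night"): 0.05,
                ("other", "peak"): 0.40, ("other", "night"): 0.25}
sample_share = {("work", "peak"): 0.45, ("work", "night"): 0.02,
                ("other", "peak"): 0.43, ("other", "night"): 0.10}

# Post-stratification weight: each sampled trip in a cell counts
# target_share / sample_share times toward the congestion index.
weights = {cell: target_share[cell] / sample_share[cell]
           for cell in target_share}

for cell, w in weights.items():
    print(cell, f"weight = {w:.2f}")
```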

A GM Story Revisited: I can give you a highly plausible but not definitive explanation of Glenn’s paradoxical travel times. Martin Bruening led the City of Milwaukee traffic engineering staff from 1956 to 1972. According to the Institute of Transportation Engineers, “Bruening was long an advocate of progressive signal timing.” It is very likely that the signals on the arterials Glenn drove those two days were well coordinated. In the era before good signalization software, traffic engineers could create excellent, even perfect, progression in a single direction on a road by using a “space-time” diagram. An unfortunate side effect of this pencil-and-paper graphical technique can be relatively poor progression in the opposite direction, especially when block spacing is irregular. If Bruening wanted to maximize speeds and safety in a corridor, then he would opt for near-perfect progression in the heavily traveled direction and disregard the few drivers going the other way. When Glenn traveled from downtown to Wauwatosa in light traffic on those mornings, he hit numerous red lights, but when he traveled from Wauwatosa to downtown, he glided through almost all the signals on the green phases even though he was surrounded by lots of commuters.
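
For readers who have never built a space-time diagram, the sketch below shows the mechanism with made-up numbers: offsets chosen for perfect inbound progression, then the same offsets as seen by an outbound driver. The cycle length, green time, progression speed, and block spacings are all assumptions, not Bruening’s actual plans, and the calculation simply locates the unimpeded trajectory within each signal’s cycle rather than accumulating the extra delay of actually stopping.

```python
# Illustrative only: all timings, speeds, and spacings are assumptions.
CYCLE = 80.0              # assumed cycle length, seconds
GREEN = 40.0              # assumed arterial green per cycle, seconds
SPEED = 40 * 5280 / 3600  # assumed progression speed: 40 mph in ft/s

spacing = [900, 1300, 600, 1100, 800]   # irregular block spacing, feet

positions = [0.0]
for block in spacing:
    positions.append(positions[-1] + block)

# Offsets chosen for perfect inbound progression: each signal's green
# begins exactly when the inbound platoon arrives.
offsets = [(x / SPEED) % CYCLE for x in positions]

def seconds_past_green(depart_time, distance, offset):
    """How far into the local cycle (seconds past green onset) a car that
    departs at master time depart_time and travels distance arrives.
    Ignores stopping: it just locates the unimpeded trajectory in the cycle."""
    return (depart_time + distance / SPEED - offset) % CYCLE

print("Inbound (the favored direction):")
for pos, off in zip(positions, offsets):
    t = seconds_past_green(offsets[0], pos, off)
    print(f"  signal at {pos:5.0f} ft: {t:5.1f}s into cycle ->",
          "green" if t < GREEN else "RED")

print("Outbound (same offsets, opposite direction):")
for pos, off in zip(reversed(positions), reversed(offsets)):
    t = seconds_past_green(offsets[-1], positions[-1] - pos, off)
    print(f"  signal at {pos:5.0f} ft: {t:5.1f}s into cycle ->",
          "green" if t < GREEN else "RED")
```

With these assumed numbers, every inbound arrival lands at the start of green, while several outbound arrivals land in red, which is roughly the pattern Glenn and I measured.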

How should traffic control devices be treated when determining free flow speeds? Signals and stop signs inhibit drivers from traveling at their “desired” speeds, but they would not be needed at all if there weren’t any other cars on the road. Please allow me a personal story. Early in my marriage, while Shirley and I were still living in Los Angeles, she had a medical emergency at 3 AM. I was instructed by her doctor to take her immediately to a hospital that was 30 minutes away in daytime traffic. Traveling at my “desired” speed and pretty much ignoring all stop signs and red lights, I completed the trip in just 10 minutes. Does that mean travel times in Los Angeles consist of 2 parts of delay for each part of free travel time?

To further complicate matters, signals vary their timings across the day and particularly at night when many signals go to flashing mode. To compute a congestion index, should we be using the nighttime signalization for free flow speed or should we use the signalization that exists at the time drivers are actually making their trips? QRS II, my travel forecasting software, does the latter, since it has no knowledge of what might happen at other times of the day. QRS II allows drivers to go at their “desired” speed (perhaps above the speed limit) but inhibits them by whatever delay is inherent in the traffic controls at essentially a zero flow rate.
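
As a rough sketch of that idea (not QRS II’s actual code), the HCM-style uniform-delay term evaluated at a zero flow rate gives the average wait of a lone driver arriving at random in the cycle, and that control delay can be added to the running time at the desired speed. The cycle lengths, green ratios, link length, and desired speed below are assumptions for illustration.

```python
def zero_flow_signal_delay(cycle_s, g_over_c):
    """HCM-style uniform delay (seconds/vehicle) with the volume-to-capacity
    ratio set to zero: d = C * (1 - g/C)**2 / 2, i.e., the average wait of a
    lone car arriving at random in the cycle."""
    return 0.5 * cycle_s * (1.0 - g_over_c) ** 2

# Assumed link: 2.0 miles at a 40 mph desired speed, passing three signals
# with assumed cycle lengths (seconds) and green ratios (g/C).
length_mi, desired_mph = 2.0, 40.0
signals = [(90.0, 0.45), (80.0, 0.50), (120.0, 0.40)]

running_time_s = length_mi / desired_mph * 3600.0
control_delay_s = sum(zero_flow_signal_delay(c, gc) for c, gc in signals)

print(f"Zero-flow running time : {running_time_s:.0f} s")
print(f"Zero-flow control delay: {control_delay_s:.0f} s")
print(f"Free-flow link time    : {running_time_s + control_delay_s:.0f} s")
```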

Closure: There are considerable difficulties involved in calculating a congestion index that simultaneously meets all technical and political requirements. Purveyors of such indexes must adhere to traffic engineering principles, to the extent possible, and be absolutely clear about any assumptions made.

Alan Horowitz, Whitefish Bay, January 22, 2020

Comments (1)

  • Point of disclosure: Alan and I have kicked around these ideas for years, both with and without use of variability in modeled travel time. And ask any 10 people in the industry what “free flow” travel speed or time is, and you might get 7 different answers – at least in its measurement if not in the theory behind it.

    For decades, travel modelers have been taught that travel delay depends only on the volume within that traffic stream, whereas if you start with the Highway Capacity Manual instead (operations methods, not “planning”), you learn that the primary determinant of delay is how much conflicting volume needs accommodation. So, results that are “counter-intuitive” to the modelers are actually the norm, not the exception – signal time is needed to process opposing left turns and side street traffic, as well as the directional traffic progression schemes that Alan described. Beyond that, not all traffic is alike. In addition to the distinctions between car and truck traffic, some drivers navigate the roadway system more efficiently than others, based on their trip purpose and/or how familiar they are with the route traveled. This is partly why “mid-day” travel speeds are often found to be lower than peak-period speeds. Finally, related to the paper Alan was asked to review – and as many users of the national NPMRDS and other GPS speed data products have surely noticed by now – nighttime travel speeds are usually found to be lower, not higher, than daytime speeds, despite much lower volumes at night. Why? Visibility, for one. And the percentage of trucks in traffic is usually higher at night than in the daytime.

    So, what are good planning-level index values for free-flow and congested travel times (for determining such things as reliability)? Positions have already been staked out for the 15th, 80th, and 95th percentile values, the latter two of which you can see within FHWA’s current Performance Measurement specs. The 15th percentile of travel time is viewed as equivalent to the 85th percentile of spot speeds historically used by traffic engineers for setting posted speed limits (and roughly corresponding to one standard deviation above the average of normally distributed values). The 95th percentile is considered a consensus value of what roadway travelers expect (i.e., to arrive at destinations within their estimated travel time at least 19 out of every 20 trips), and the 80th percentile is a consensus value among roadway system operators as to what is a reasonable expectation of service, given the myriad ways operations can be disrupted (weather, crashes, special events, etc.) combined with the response time needed to mitigate these disruptions. (You may have good reason to establish different benchmarks for your region, but then it’s up to you to defend them.)
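
    As a rough illustration of those benchmarks, here is a minimal sketch that pulls the 15th, 80th, and 95th percentile travel times from a sample and forms reliability ratios against the 15th percentile used as the free-flow proxy. The travel-time sample is synthetic, and treating the 15th percentile as the free-flow baseline follows the position described above rather than any particular agency’s rule.

    ```python
    import numpy as np

    # Synthetic travel times (minutes) for one trip or segment over many days.
    rng = np.random.default_rng(0)
    times = rng.lognormal(mean=np.log(22.0), sigma=0.3, size=500)

    p15, p80, p95 = np.percentile(times, [15, 80, 95])

    # 15th percentile as the free-flow proxy, per the discussion above.
    print(f"15th percentile (free-flow proxy): {p15:.1f} min")
    print(f"80th / 15th (operators' benchmark): {p80 / p15:.2f}")
    print(f"95th / 15th (travelers' planning ratio): {p95 / p15:.2f}")
    ```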