Pain and Anguish while Creating an Automated Tool that Recommends Travel Forecasting Methods

I am on an NCHRP panel that is supervising a project to quantify the benefits of various methodologies for statewide travel forecasting.  That project is getting close to finishing, so I wanted to compare our consultant’s findings to whatever may have come out of the recent NCHRP project that created the TFGuide web app.  This tool is documented in NCHRP Report 852, and it makes recommendations about what a travel model should look like.  These two NCHRP projects struck me as awfully ambitious, given the large number of factors, many of them subjective or political, that go into defining a travel forecasting model for an MPO or a state.

Which reminds me of my own fits and starts to create the Tools Selection Matrix for NCHRP Report 765.

The Tools Selection Matrix did not make it into the main body of NCHRP Report 765, but was relegated to an appendix.  However, it was rescued for the Hawaii report, and it now appears on TFResource.org; see http://tfresource.org/Choice_of_techniques_in_project-level_traffic_forecasts.  This matrix automates the selection of project-level traffic forecasting techniques.

A big problem with the Tools Selection Matrix was its flatness.  There was no easy way to deal with the many dimensions of the selection process on a spreadsheet.  If you were to peek at the matrix, you would see its six principal input dimensions:  applications; geography; forecast output requirements; time horizons; budget; and technical resources.  The dimensions interact with each other in a myriad of ways, which could not be depicted on a plane.  Although I struggled mightily to come up with a simple procedure to implement the matrix, its full potential could not be achieved without incorporating a great deal of outside expertise.
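
To give a sense of why flatness is so limiting, here is a minimal sketch of how the six input dimensions might be handled as a set of rules rather than as a two-dimensional spreadsheet.  The dimension names are the ones listed above; everything else (the rules, the example situations, and the recommended techniques) is an invented placeholder for illustration, not the actual content of the Matrix.

    # Hypothetical sketch: the Matrix's six dimensions as a rule-based lookup.
    # Each rule conditions on a combination of dimensions; any dimension it
    # omits is treated as "don't care".  The rules shown are placeholders.
    from dataclasses import dataclass

    @dataclass
    class Situation:
        application: str          # e.g., "site impact", "corridor study"
        geography: str            # e.g., "urban", "rural"
        outputs: str              # forecast output requirements
        horizon: str              # time horizon
        budget: str               # "low", "medium", "high"
        technical_resources: str  # staff capability

    RULES = [
        ({"application": "site impact", "budget": "low"}, "trend/time-series methods"),
        ({"application": "corridor study", "outputs": "turning movements"}, "DTA or refined assignment"),
        ({"geography": "rural", "horizon": "long"}, "statewide or regional travel model"),
    ]

    def recommend(situation: Situation) -> list:
        """Return every placeholder technique whose rule matches the situation."""
        matches = []
        for conditions, technique in RULES:
            if all(getattr(situation, key) == value for key, value in conditions.items()):
                matches.append(technique)
        return matches

    # Example: a low-budget site-impact study in an urban area
    print(recommend(Situation("site impact", "urban", "volumes", "short", "low", "modest")))

Even this toy version shows the problem:  each useful rule cuts across several dimensions at once, so the number of meaningful combinations grows far faster than anything that can be laid out on a single sheet.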

Perhaps the Matrix’s most serious drawback is that it makes no concessions to budget, training, traditions, or software availability.  That was intentional.  If the Matrix recommends DTA but you are most comfortable doing time series, that’s your problem.  You cannot force the Matrix to justify your current methods if those methods are not ideal for the application.

Which reminds me of the several peer reviews I have participated in, where I was just one of many experts at each.

None of those peer reviews relied on anything special beyond our own knowledge and experience.  No checklists, no spreadsheets, no expert system software.  Agencies were asked to document their needs, constraints, and existing tools.  We listened and we reacted.  And we argued among ourselves a little.  The number of recommendations was impressive in all cases, and those recommendations were far more than stock responses.  In the end, there was always general agreement about big-picture recommendations.  Perhaps the peer-review panelists were chosen for compatibility beyond the nerdiness we all exhibited.  We put our petty personal interests aside.

Spokane was a good example.  At the time, they had a decent regional travel model for long-range planning and conformity, but their forecasting methodology contained some quirky elements.  They had a rather pressing need to improve their ability to assess traffic from site developments and new subdivisions, and these needs had not been previously addressed.  They also wanted to improve their transit and bicycle planning capabilities.  They had a tight budget for model development and a small staff.  See https://www.fhwa.dot.gov/planning/tmip/resources/peer_review_program/srtc/ for details.

In just two hours of discussion, we came up with a couple dozen recommendations.  We told them what they should do and what they should avoid.  We tried to salvage as much of their existing methodology as we could, including retaining their current software platform so as to maintain continuity and to best utilize their training.  However, we told them to implement parts of this software platform that had been ignored, especially to improve assignment and delay estimation.  We wanted them to standardize their model on well-established and validated techniques.  Go to Chapter 5 of the peer-review report and see for yourself the amount of detail we were able to provide.

So I tried to input Spokane’s situation into the TFGuide web app and I really couldn’t get satisfying answers, as compared to the peer review.  I did not have enough input flexibility and TFGuide’s recommendations would not have been particularly helpful in solving Spokane’s biggest issues.  However, TFGuide’s cost estimates seemed ballpark correct.  There may be places where TFGuide would do fine.  I did not see anything misleading in how TFGuide handled Spokane, but the recommendations were just not sufficiently surgical.

I would give both TFGuide and my Tools Selection Matrix an “E” for effort and an “I” for incomplete, but for different reasons.  I have confidence in the peer-review process, but I recognize that it takes long lead times and a lot of volunteer and consultant help to pull one of these off.  Perhaps it would be worthwhile for someone to go over all of the peer reviews to date and determine whether there is a manageable superset of inputs and outputs.  What do you think the next steps should be?

For my part, I wouldn’t mind hearing suggestions about how the Tools Selection Matrix could be improved for usability.

Alan Horowitz, Whitefish Bay, February 5, 2018