Musings on the Paper Refereeing Process in Travel Forecasting

I recently declined to referee a paper for a journal because the authors had so infuriated me by the second paragraph that I decided I could not be objective. Hence this blog article.

Another TRB paper evaluation season has come and gone. Some authors are delighted to have gotten into the Annual Meeting and the remainder are angry or disappointed or resigned to their fate.

I consider my role as a referee to be both an obligation and an opportunity. I rarely read a paper from top to bottom, except when refereeing. And, although I will decline a paper well outside my area of expertise, I can learn much from a paper that stretches the limits of my knowledge.

Since journals now often send each referee’s comments back to all referees, I have found that I am just about average in my requirements for publication. On rare occasions I am an outlier (favorable or unfavorable) but most of the time the eventual disposition of the paper is pretty much what I recommend.

I can spend between 2 hours (really clueless paper) and 2 days (really technical paper) on a single paper. Since I do maybe 20 papers each year from all journals, this is often a substantial time commitment and I want the authors to take me seriously, especially when a concerted response to my comments might save a marginal paper. Not so long ago I was the only positive referee on a TRB submission that had some serious issues. However, I required a somewhat different statement of purpose and a much revised set of conclusions that could be properly supported by their analysis – nothing particularly difficult. The authors sent back a long argument as to why they were standing pat on their conclusions. I pulled my support and the paper was rejected.

I cannot tell an author in our field how to write a good paper, but I can tell this same author how to avoid some bad referee reports.

I always look for tension in the purpose of the paper. Research, by its nature, is an attempt to push the boundaries of the state of the art or to fix current practices that are performing badly. Both of these situations require that we leave our comfort zone.

I always look for a comparison to something that has already been accepted as best. I will sometimes be satisfied by a comparison to conventional practice even if it is not the best, but a well-structured comparison is essential.

I am skeptical of anything tested on a toy network or otherwise hypothetical urban form. I have been to Sioux Falls and the “Sioux Falls” network is not Sioux Falls.

An author will definitely put me into a bad mood if the implementation of his/her research requires us to abandon accepted good practice. Articles proposing faster algorithms that can only work by throwing away 30 years of progress will usually get a negative review from me, regardless of the merits of the algorithm itself.

Simulation is not research. Simulation is planning. It is OK to propose new, well-founded, simulation techniques to improve the planning process, in which case the paper may be valuable, but it is not OK to assert that results of simulations are facts.

And stated preference (SP) is simulation. We can use SP to improve planning, but SP studies should never be accepted as fact unless verified by ground data (revealed preference, RP, or otherwise). There is one particular coefficient, which I have seen used in planning studies, that has been derived from multiple SP studies and has never been verified by a revealed preference study. Scary.

A single case study does not establish a predictable trend. Case studies are some of the most accessible papers, but authors will often over-reach by concluding that their case study is representative.

I especially dislike articles that propose a policy change (which nobody would want to do anyway), and then propose a new algorithm to evaluate that policy change. I feel those authors are wasting a lot of their own time and the time of others in order to satisfy the sole purpose of adding a new line to their resume.

My expectations are not extreme. I don’t expect the authors to have read everything done on the subject, but I do expect cited papers to have been read and understood. If the content of cited papers is ignored, my trust in the research is diminished. I am usually able to shrug off disappointment when I am not cited and I think I should have been.

I try to evaluate the research, not the authors. Nonetheless, I do not tolerate sloppiness from people who should know better. My meanest comments are reserved for well-known personalities who are trying to pass off schlock as something important.

On occasion I get a paper written by people who are not experts in travel forecasting but have a bright idea that they think would help us. These papers are often difficult to evaluate because there might be value in the bright idea once the rookie mistakes are overcome. These papers require me as the referee to make a difficult conceptual leap to determine whether the bright idea would still work when the travel forecasting aspects are suitably upgraded. It’s easy to get these judgment calls wrong.

Unfortunately, many of the motivations for submitting papers for publication are not altruistic: assistant professors need publications for tenure; merit criteria count publications; name recognition can lead to future job prospects and funding; et cetera. It seems that fewer and fewer papers are being submitted for the right reasons; that is, because the research is exciting and we want the world to know about it. I hope I am wrong about this last observation.

The large number of papers that are being submitted, presented and published is causing a secondary problem. Assuming that our paper selection process is fair (and I think it mostly is), how do we cut through all the clutter and find those few nuggets that will truly help our field grow and prosper?

Alan Horowitz, Whitefish Bay, October 13, 2015